Test Report: Docker_Linux_crio 21139

c4345f2baa4ca80c4898fac9368be2207cfcb3f0:2025-11-09:42265

Failed tests (37 of 328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 13.61
36 TestAddons/parallel/RegistryCreds 0.38
37 TestAddons/parallel/Ingress 148.93
38 TestAddons/parallel/InspektorGadget 5.23
39 TestAddons/parallel/MetricsServer 6.3
41 TestAddons/parallel/CSI 52.71
42 TestAddons/parallel/Headlamp 2.34
43 TestAddons/parallel/CloudSpanner 5.24
44 TestAddons/parallel/LocalPath 8.07
45 TestAddons/parallel/NvidiaDevicePlugin 6.24
46 TestAddons/parallel/Yakd 5.23
47 TestAddons/parallel/AmdGpuDevicePlugin 5.24
97 TestFunctional/parallel/ServiceCmdConnect 602.65
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.04
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.62
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.49
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.28
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
143 TestFunctional/parallel/ServiceCmd/DeployApp 600.56
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
153 TestFunctional/parallel/ServiceCmd/Format 0.51
154 TestFunctional/parallel/ServiceCmd/URL 0.51
191 TestJSONOutput/pause/Command 2.37
197 TestJSONOutput/unpause/Command 1.93
295 TestPause/serial/Pause 5.69
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.21
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.98
318 TestStartStop/group/old-k8s-version/serial/Pause 5.97
324 TestStartStop/group/no-preload/serial/Pause 5.4
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.98
338 TestStartStop/group/newest-cni/serial/Pause 6.09
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.37
345 TestStartStop/group/embed-certs/serial/Pause 6.98
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.06
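Triage note: the Volcano, Registry, and RegistryCreds failures detailed below all exit with status 11 and the same MK_ADDON_DISABLE_PAUSED error. In each case the addon-disable path first lists kube-system containers with crictl (which succeeds), then runs `sudo runc list -f json` on the node, which fails with `open /run/runc: no such file or directory`. The following is a minimal sketch for re-running that command against the addons-762402 node container from the CI host; reaching the node via `docker exec` (rather than the SSH path the test uses, per the ssh_runner log lines) is an assumption, and the file name is hypothetical.

	// runc_check_repro.go -- hypothetical helper, not part of the test suite.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the addon-disable paused-state check runs on the node
		// (see the ssh_runner lines in the logs below); docker exec into the
		// kic container is assumed to be an equivalent way to reach it.
		cmd := exec.Command("docker", "exec", "addons-762402",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output:\n%s\n", out)
		if err != nil {
			// On this run the command exits 1 with:
			//   open /run/runc: no such file or directory
			fmt.Printf("error: %v\n", err)
		}
	}

Because crictl lists the kube-system containers successfully in the same logs, the failure appears to sit in the runc state lookup rather than in the addons being disabled.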
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable volcano --alsologtostderr -v=1: exit status 11 (236.224472ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1109 13:30:54.124635   18591 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:30:54.124775   18591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:30:54.124783   18591 out.go:374] Setting ErrFile to fd 2...
	I1109 13:30:54.124787   18591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:30:54.124944   18591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:30:54.125163   18591 mustload.go:66] Loading cluster: addons-762402
	I1109 13:30:54.125448   18591 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:30:54.125461   18591 addons.go:607] checking whether the cluster is paused
	I1109 13:30:54.125538   18591 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:30:54.125549   18591 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:30:54.125956   18591 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:30:54.145137   18591 ssh_runner.go:195] Run: systemctl --version
	I1109 13:30:54.145177   18591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:30:54.162236   18591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:30:54.252547   18591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:30:54.252608   18591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:30:54.279312   18591 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:30:54.279338   18591 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:30:54.279341   18591 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:30:54.279345   18591 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:30:54.279348   18591 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:30:54.279352   18591 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:30:54.279354   18591 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:30:54.279357   18591 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:30:54.279359   18591 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:30:54.279368   18591 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:30:54.279371   18591 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:30:54.279374   18591 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:30:54.279376   18591 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:30:54.279379   18591 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:30:54.279381   18591 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:30:54.279391   18591 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:30:54.279398   18591 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:30:54.279402   18591 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:30:54.279404   18591 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:30:54.279407   18591 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:30:54.279412   18591 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:30:54.279414   18591 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:30:54.279416   18591 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:30:54.279419   18591 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:30:54.279421   18591 cri.go:89] found id: ""
	I1109 13:30:54.279458   18591 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:30:54.292696   18591 out.go:203] 
	W1109 13:30:54.293852   18591 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:30:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:30:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:30:54.293871   18591 out.go:285] * 
	* 
	W1109 13:30:54.296851   18591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:30:54.298032   18591 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)

TestAddons/parallel/Registry (13.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.932851ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002161806s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002774574s
addons_test.go:392: (dbg) Run:  kubectl --context addons-762402 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-762402 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-762402 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.188855972s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 ip
2025/11/09 13:31:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable registry --alsologtostderr -v=1: exit status 11 (229.931147ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1109 13:31:15.476942   21456 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:15.477102   21456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:15.477113   21456 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:15.477117   21456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:15.477301   21456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:15.477515   21456 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:15.477841   21456 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:15.477855   21456 addons.go:607] checking whether the cluster is paused
	I1109 13:31:15.477934   21456 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:15.477945   21456 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:15.478278   21456 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:15.496606   21456 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:15.496653   21456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:15.512706   21456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:15.602332   21456 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:15.602412   21456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:15.632435   21456 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:15.632470   21456 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:15.632477   21456 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:15.632482   21456 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:15.632487   21456 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:15.632492   21456 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:15.632496   21456 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:15.632499   21456 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:15.632502   21456 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:15.632513   21456 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:15.632519   21456 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:15.632522   21456 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:15.632524   21456 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:15.632527   21456 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:15.632530   21456 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:15.632540   21456 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:15.632548   21456 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:15.632554   21456 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:15.632558   21456 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:15.632562   21456 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:15.632571   21456 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:15.632578   21456 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:15.632582   21456 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:15.632586   21456 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:15.632590   21456 cri.go:89] found id: ""
	I1109 13:31:15.632654   21456 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:15.646522   21456 out.go:203] 
	W1109 13:31:15.647763   21456 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:15.647777   21456 out.go:285] * 
	* 
	W1109 13:31:15.650758   21456 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:15.651780   21456 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.61s)

TestAddons/parallel/RegistryCreds (0.38s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.89137ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-762402
addons_test.go:332: (dbg) Run:  kubectl --context addons-762402 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (226.231448ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1109 13:31:15.859058   21565 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:15.859190   21565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:15.859199   21565 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:15.859204   21565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:15.859397   21565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:15.859637   21565 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:15.859936   21565 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:15.859950   21565 addons.go:607] checking whether the cluster is paused
	I1109 13:31:15.860037   21565 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:15.860055   21565 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:15.860416   21565 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:15.877552   21565 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:15.877601   21565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:15.894630   21565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:15.985360   21565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:15.985428   21565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:16.012562   21565 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:16.012589   21565 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:16.012594   21565 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:16.012599   21565 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:16.012603   21565 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:16.012608   21565 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:16.012612   21565 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:16.012631   21565 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:16.012661   21565 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:16.012668   21565 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:16.012676   21565 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:16.012680   21565 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:16.012687   21565 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:16.012692   21565 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:16.012699   21565 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:16.012706   21565 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:16.012714   21565 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:16.012721   21565 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:16.012725   21565 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:16.012729   21565 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:16.012737   21565 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:16.012744   21565 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:16.012748   21565 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:16.012756   21565 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:16.012760   21565 cri.go:89] found id: ""
	I1109 13:31:16.012799   21565 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:16.025422   21565 out.go:203] 
	W1109 13:31:16.026622   21565 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:16.026660   21565 out.go:285] * 
	* 
	W1109 13:31:16.029565   21565 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:16.030665   21565 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.38s)

TestAddons/parallel/Ingress (148.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-762402 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-762402 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-762402 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [93f5caf5-6d2a-477b-8a6c-a438f9593549] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [93f5caf5-6d2a-477b-8a6c-a438f9593549] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.033676292s
I1109 13:31:18.799513    9365 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.616423579s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-762402 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-762402
helpers_test.go:243: (dbg) docker inspect addons-762402:

-- stdout --
	[
	    {
	        "Id": "821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1",
	        "Created": "2025-11-09T13:29:15.097575436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11340,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:29:15.128393835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1/hosts",
	        "LogPath": "/var/lib/docker/containers/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1-json.log",
	        "Name": "/addons-762402",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-762402:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-762402",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1",
	                "LowerDir": "/var/lib/docker/overlay2/e6a2b324288c15b6de54fa215af7bfd988b91baf8b258f3ffdddbbe00df26150-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6a2b324288c15b6de54fa215af7bfd988b91baf8b258f3ffdddbbe00df26150/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6a2b324288c15b6de54fa215af7bfd988b91baf8b258f3ffdddbbe00df26150/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6a2b324288c15b6de54fa215af7bfd988b91baf8b258f3ffdddbbe00df26150/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-762402",
	                "Source": "/var/lib/docker/volumes/addons-762402/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-762402",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-762402",
	                "name.minikube.sigs.k8s.io": "addons-762402",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c73163baf89e0a44d9d35f63c5bbf73045eadc00a8cc4feef704f6b1ccd5cd1",
	            "SandboxKey": "/var/run/docker/netns/9c73163baf89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-762402": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:71:c5:60:f8:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d89a87f039a77445f033266b233e8ec4079eeadc9cdaa00ebb680ec78f070cc4",
	                    "EndpointID": "34b003a2e84c4cda2fceb53081a266378dc792aa714e2d36f367a8f413ded0a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-762402",
	                        "821c1afb04ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-762402 -n addons-762402
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-762402 logs -n 25: (1.072110779s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-557048 --alsologtostderr --binary-mirror http://127.0.0.1:42397 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-557048 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ -p binary-mirror-557048                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-557048 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ addons  │ enable dashboard -p addons-762402                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-762402                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ start   │ -p addons-762402 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:30 UTC │
	│ addons  │ addons-762402 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:30 UTC │                     │
	│ addons  │ addons-762402 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-762402 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ addons-762402 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ addons-762402 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ addons-762402 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ ssh     │ addons-762402 ssh cat /opt/local-path-provisioner/pvc-762784ac-7e30-4ec8-bec8-a2511c62cb32_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-762402 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ addons-762402 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ ip      │ addons-762402 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-762402 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-762402                                                                                                                                                                                                                                                                                                                                                                                           │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-762402 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ addons-762402 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ ssh     │ addons-762402 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ addons-762402 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ addons-762402 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ addons-762402 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-762402 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ ip      │ addons-762402 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-762402        │ jenkins │ v1.37.0 │ 09 Nov 25 13:33 UTC │ 09 Nov 25 13:33 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:28:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:28:51.600069   10696 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:28:51.600301   10696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:51.600310   10696 out.go:374] Setting ErrFile to fd 2...
	I1109 13:28:51.600317   10696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:51.600491   10696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:28:51.600961   10696 out.go:368] Setting JSON to false
	I1109 13:28:51.601742   10696 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":682,"bootTime":1762694250,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:28:51.601814   10696 start.go:143] virtualization: kvm guest
	I1109 13:28:51.603408   10696 out.go:179] * [addons-762402] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:28:51.604561   10696 notify.go:221] Checking for updates...
	I1109 13:28:51.604578   10696 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:28:51.605700   10696 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:28:51.606781   10696 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:28:51.607878   10696 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 13:28:51.608938   10696 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:28:51.610065   10696 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:28:51.611385   10696 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:28:51.633924   10696 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 13:28:51.633980   10696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:28:51.685216   10696 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-09 13:28:51.676680974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:28:51.685304   10696 docker.go:319] overlay module found
	I1109 13:28:51.687526   10696 out.go:179] * Using the docker driver based on user configuration
	I1109 13:28:51.688526   10696 start.go:309] selected driver: docker
	I1109 13:28:51.688537   10696 start.go:930] validating driver "docker" against <nil>
	I1109 13:28:51.688546   10696 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:28:51.689040   10696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:28:51.738660   10696 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-09 13:28:51.729890421 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:28:51.738844   10696 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:28:51.739104   10696 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:28:51.740630   10696 out.go:179] * Using Docker driver with root privileges
	I1109 13:28:51.741825   10696 cni.go:84] Creating CNI manager for ""
	I1109 13:28:51.741876   10696 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:28:51.741885   10696 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 13:28:51.741933   10696 start.go:353] cluster config:
	{Name:addons-762402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1109 13:28:51.743096   10696 out.go:179] * Starting "addons-762402" primary control-plane node in "addons-762402" cluster
	I1109 13:28:51.744237   10696 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:28:51.745347   10696 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:28:51.746361   10696 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:28:51.746385   10696 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 13:28:51.746383   10696 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:28:51.746391   10696 cache.go:65] Caching tarball of preloaded images
	I1109 13:28:51.746461   10696 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 13:28:51.746471   10696 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:28:51.746798   10696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/config.json ...
	I1109 13:28:51.746832   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/config.json: {Name:mkdd4030f0ca96ade544f1277301cec246e906a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:28:51.761961   10696 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:28:51.762059   10696 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1109 13:28:51.762074   10696 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1109 13:28:51.762078   10696 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1109 13:28:51.762084   10696 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1109 13:28:51.762091   10696 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1109 13:29:04.099272   10696 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1109 13:29:04.099317   10696 cache.go:243] Successfully downloaded all kic artifacts
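	For reference (not part of the captured run), the kic base image loaded above should now be visible in the host's Docker daemon; its presence and digest could be checked with:

	    docker images --digests gcr.io/k8s-minikube/kicbase-builds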
	I1109 13:29:04.099358   10696 start.go:360] acquireMachinesLock for addons-762402: {Name:mkb378b64899117f3c03bff88efab238bc9c3942 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:29:04.099457   10696 start.go:364] duration metric: took 77.657µs to acquireMachinesLock for "addons-762402"
	I1109 13:29:04.099484   10696 start.go:93] Provisioning new machine with config: &{Name:addons-762402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:04.099573   10696 start.go:125] createHost starting for "" (driver="docker")
	I1109 13:29:04.101685   10696 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1109 13:29:04.101903   10696 start.go:159] libmachine.API.Create for "addons-762402" (driver="docker")
	I1109 13:29:04.101938   10696 client.go:173] LocalClient.Create starting
	I1109 13:29:04.102045   10696 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 13:29:04.275693   10696 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 13:29:04.414253   10696 cli_runner.go:164] Run: docker network inspect addons-762402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 13:29:04.430476   10696 cli_runner.go:211] docker network inspect addons-762402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 13:29:04.430528   10696 network_create.go:284] running [docker network inspect addons-762402] to gather additional debugging logs...
	I1109 13:29:04.430549   10696 cli_runner.go:164] Run: docker network inspect addons-762402
	W1109 13:29:04.445800   10696 cli_runner.go:211] docker network inspect addons-762402 returned with exit code 1
	I1109 13:29:04.445831   10696 network_create.go:287] error running [docker network inspect addons-762402]: docker network inspect addons-762402: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-762402 not found
	I1109 13:29:04.445849   10696 network_create.go:289] output of [docker network inspect addons-762402]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-762402 not found
	
	** /stderr **
	I1109 13:29:04.445947   10696 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:29:04.461367   10696 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002009270}
	I1109 13:29:04.461403   10696 network_create.go:124] attempt to create docker network addons-762402 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 13:29:04.461446   10696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-762402 addons-762402
	I1109 13:29:04.513425   10696 network_create.go:108] docker network addons-762402 192.168.49.0/24 created
	I1109 13:29:04.513452   10696 kic.go:121] calculated static IP "192.168.49.2" for the "addons-762402" container
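	As an illustration (this command is not part of the captured run), the subnet and gateway of the dedicated bridge network created above can be confirmed with the same IPAM fields minikube itself queries:

	    docker network inspect addons-762402 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # expected: 192.168.49.0/24 192.168.49.1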
	I1109 13:29:04.513512   10696 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 13:29:04.528365   10696 cli_runner.go:164] Run: docker volume create addons-762402 --label name.minikube.sigs.k8s.io=addons-762402 --label created_by.minikube.sigs.k8s.io=true
	I1109 13:29:04.544338   10696 oci.go:103] Successfully created a docker volume addons-762402
	I1109 13:29:04.544389   10696 cli_runner.go:164] Run: docker run --rm --name addons-762402-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-762402 --entrypoint /usr/bin/test -v addons-762402:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 13:29:10.792361   10696 cli_runner.go:217] Completed: docker run --rm --name addons-762402-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-762402 --entrypoint /usr/bin/test -v addons-762402:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (6.247922954s)
	I1109 13:29:10.792393   10696 oci.go:107] Successfully prepared a docker volume addons-762402
	I1109 13:29:10.792445   10696 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:10.792460   10696 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 13:29:10.792526   10696 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-762402:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 13:29:15.027728   10696 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-762402:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.23516231s)
	I1109 13:29:15.027773   10696 kic.go:203] duration metric: took 4.235309729s to extract preloaded images to volume ...
	W1109 13:29:15.027871   10696 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 13:29:15.027901   10696 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 13:29:15.027937   10696 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 13:29:15.083197   10696 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-762402 --name addons-762402 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-762402 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-762402 --network addons-762402 --ip 192.168.49.2 --volume addons-762402:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
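	The container started above publishes SSH (22), the Docker API port (2376), a registry port (5000) and the API server ports (8443/32443) on ephemeral loopback ports. Purely as an illustration (not captured in this run), the actual bindings, including the SSH port 32768 that the log resolves a few lines below, can be listed with:

	    docker port addons-762402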
	I1109 13:29:15.392455   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Running}}
	I1109 13:29:15.408924   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:15.427153   10696 cli_runner.go:164] Run: docker exec addons-762402 stat /var/lib/dpkg/alternatives/iptables
	I1109 13:29:15.469947   10696 oci.go:144] the created container "addons-762402" has a running status.
	I1109 13:29:15.469982   10696 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa...
	I1109 13:29:16.033842   10696 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 13:29:16.057897   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:16.073654   10696 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 13:29:16.073674   10696 kic_runner.go:114] Args: [docker exec --privileged addons-762402 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 13:29:16.114282   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:16.130208   10696 machine.go:94] provisionDockerMachine start ...
	I1109 13:29:16.130288   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.146042   10696 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:16.146277   10696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:16.146293   10696 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:29:16.267677   10696 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-762402
	
	I1109 13:29:16.267699   10696 ubuntu.go:182] provisioning hostname "addons-762402"
	I1109 13:29:16.267758   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.285314   10696 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:16.285500   10696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:16.285513   10696 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-762402 && echo "addons-762402" | sudo tee /etc/hostname
	I1109 13:29:16.415536   10696 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-762402
	
	I1109 13:29:16.415596   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.432514   10696 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:16.432722   10696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:16.432739   10696 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-762402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-762402/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-762402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:29:16.554254   10696 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:29:16.554278   10696 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 13:29:16.554304   10696 ubuntu.go:190] setting up certificates
	I1109 13:29:16.554313   10696 provision.go:84] configureAuth start
	I1109 13:29:16.554388   10696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-762402
	I1109 13:29:16.570560   10696 provision.go:143] copyHostCerts
	I1109 13:29:16.570627   10696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 13:29:16.570771   10696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 13:29:16.570847   10696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 13:29:16.570918   10696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.addons-762402 san=[127.0.0.1 192.168.49.2 addons-762402 localhost minikube]
	I1109 13:29:16.712281   10696 provision.go:177] copyRemoteCerts
	I1109 13:29:16.712335   10696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:29:16.712367   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.729261   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:16.819704   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 13:29:16.836632   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:29:16.851496   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:29:16.866651   10696 provision.go:87] duration metric: took 312.316652ms to configureAuth
	I1109 13:29:16.866674   10696 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:29:16.866806   10696 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:16.866888   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.883238   10696 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:16.883473   10696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:16.883497   10696 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:29:17.109605   10696 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:29:17.109627   10696 machine.go:97] duration metric: took 979.402343ms to provisionDockerMachine
	I1109 13:29:17.109664   10696 client.go:176] duration metric: took 13.007715858s to LocalClient.Create
	I1109 13:29:17.109684   10696 start.go:167] duration metric: took 13.007781712s to libmachine.API.Create "addons-762402"
	I1109 13:29:17.109695   10696 start.go:293] postStartSetup for "addons-762402" (driver="docker")
	I1109 13:29:17.109707   10696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:29:17.109768   10696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:29:17.109817   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:17.126600   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:17.217963   10696 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:29:17.220946   10696 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:29:17.220967   10696 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:29:17.220976   10696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 13:29:17.221016   10696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 13:29:17.221036   10696 start.go:296] duration metric: took 111.335357ms for postStartSetup
	I1109 13:29:17.221269   10696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-762402
	I1109 13:29:17.238120   10696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/config.json ...
	I1109 13:29:17.238332   10696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:29:17.238366   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:17.253826   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:17.341788   10696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:29:17.345817   10696 start.go:128] duration metric: took 13.246232223s to createHost
	I1109 13:29:17.345835   10696 start.go:83] releasing machines lock for "addons-762402", held for 13.246364553s
	I1109 13:29:17.345894   10696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-762402
	I1109 13:29:17.362599   10696 ssh_runner.go:195] Run: cat /version.json
	I1109 13:29:17.362665   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:17.362669   10696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:29:17.362718   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:17.380132   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:17.380262   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:17.528691   10696 ssh_runner.go:195] Run: systemctl --version
	I1109 13:29:17.534292   10696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:29:17.564761   10696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:29:17.568813   10696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:29:17.568876   10696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:29:17.591955   10696 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 13:29:17.591971   10696 start.go:496] detecting cgroup driver to use...
	I1109 13:29:17.591993   10696 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 13:29:17.592030   10696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:29:17.605944   10696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:29:17.616507   10696 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:29:17.616548   10696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:29:17.630930   10696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:29:17.646055   10696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:29:17.721903   10696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:29:17.802173   10696 docker.go:234] disabling docker service ...
	I1109 13:29:17.802221   10696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:29:17.817723   10696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:29:17.828570   10696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:29:17.904433   10696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:29:17.980708   10696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:29:17.991266   10696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:29:18.003629   10696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:29:18.003686   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.012603   10696 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 13:29:18.012659   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.020531   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.028227   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.035792   10696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:29:18.042765   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.050193   10696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.061726   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
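	Taken together, the sed edits above leave the CRI-O drop-in with roughly the following settings. This is a sketch reconstructed from the commands (section headers such as [crio.runtime] and [crio.image] omitted), not a capture from the node:

	    sudo cat /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "systemd"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]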
	I1109 13:29:18.069256   10696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:29:18.075781   10696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1109 13:29:18.075823   10696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1109 13:29:18.086408   10696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:29:18.092836   10696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:18.165914   10696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:29:18.263321   10696 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:29:18.263387   10696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:29:18.266952   10696 start.go:564] Will wait 60s for crictl version
	I1109 13:29:18.266994   10696 ssh_runner.go:195] Run: which crictl
	I1109 13:29:18.270176   10696 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:29:18.292962   10696 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:29:18.293055   10696 ssh_runner.go:195] Run: crio --version
	I1109 13:29:18.318013   10696 ssh_runner.go:195] Run: crio --version
	I1109 13:29:18.343928   10696 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:29:18.345071   10696 cli_runner.go:164] Run: docker network inspect addons-762402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:29:18.361160   10696 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:29:18.364725   10696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
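	After this rewrite, host.minikube.internal resolves to the bridge gateway from inside the node. An illustrative check (not part of the run), reusing the report's own minikube binary:

	    out/minikube-linux-amd64 -p addons-762402 ssh -- grep host.minikube.internal /etc/hosts
	    # expected: 192.168.49.1	host.minikube.internal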
	I1109 13:29:18.373807   10696 kubeadm.go:884] updating cluster {Name:addons-762402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:29:18.373917   10696 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:18.373954   10696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:18.401424   10696 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:18.401440   10696 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:29:18.401472   10696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:18.423831   10696 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:18.423847   10696 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:29:18.423854   10696 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 13:29:18.423927   10696 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-762402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
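	The ExecStart override above is written to the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. As an illustration (not captured here), the unit the node actually runs could be inspected with:

	    out/minikube-linux-amd64 -p addons-762402 ssh -- sudo systemctl cat kubelet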
	I1109 13:29:18.423982   10696 ssh_runner.go:195] Run: crio config
	I1109 13:29:18.465006   10696 cni.go:84] Creating CNI manager for ""
	I1109 13:29:18.465030   10696 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:18.465049   10696 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:29:18.465072   10696 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-762402 NodeName:addons-762402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:29:18.465207   10696 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-762402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
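	The manifest above is written to /var/tmp/minikube/kubeadm.yaml.new further down. Assuming the bundled kubeadm binary sits next to the kubelet under /var/lib/minikube/binaries/v1.34.1 (an assumption, not confirmed by this log), recent kubeadm releases can sanity-check such a file with:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new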
	
	I1109 13:29:18.465268   10696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:29:18.472401   10696 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:29:18.472449   10696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:29:18.479411   10696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 13:29:18.490717   10696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:29:18.504284   10696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1109 13:29:18.515473   10696 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 13:29:18.518634   10696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:18.527386   10696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:18.605975   10696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:18.629524   10696 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402 for IP: 192.168.49.2
	I1109 13:29:18.629545   10696 certs.go:195] generating shared ca certs ...
	I1109 13:29:18.629563   10696 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:18.629714   10696 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 13:29:18.784021   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt ...
	I1109 13:29:18.784046   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt: {Name:mkec03d697f45aeb041c27c88860e2fa28d1fd26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:18.784199   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key ...
	I1109 13:29:18.784209   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key: {Name:mkc8972f7a276c3b9e2064bd653c301100f1c2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:18.784281   10696 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 13:29:19.153419   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt ...
	I1109 13:29:19.153443   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt: {Name:mk47ed1f12a8fbfc55cbef6d30c0da65835c47ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.153611   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key ...
	I1109 13:29:19.153623   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key: {Name:mk89d6a4f617bf3b6cc9fde532fe32e3368602fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.153728   10696 certs.go:257] generating profile certs ...
	I1109 13:29:19.153782   10696 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.key
	I1109 13:29:19.153795   10696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt with IP's: []
	I1109 13:29:19.727372   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt ...
	I1109 13:29:19.727399   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: {Name:mkeac7e44f29a869869e9a50a16f513beb3c0eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.727560   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.key ...
	I1109 13:29:19.727570   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.key: {Name:mk871ff6f1019eadfaa466e0dd5301226c74d694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.727654   10696 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key.f007e2c6
	I1109 13:29:19.727672   10696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt.f007e2c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1109 13:29:19.966032   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt.f007e2c6 ...
	I1109 13:29:19.966057   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt.f007e2c6: {Name:mk30c7821a4db207a680fad2f35e7f865ebaf808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.966193   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key.f007e2c6 ...
	I1109 13:29:19.966205   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key.f007e2c6: {Name:mk6a9b75003de9d61be5a994a207c1ef5db0240a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.966275   10696 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt.f007e2c6 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt
	I1109 13:29:19.966350   10696 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key.f007e2c6 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key
	I1109 13:29:19.966398   10696 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.key
	I1109 13:29:19.966414   10696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.crt with IP's: []
	I1109 13:29:20.065922   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.crt ...
	I1109 13:29:20.065945   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.crt: {Name:mk64708e7e19aab5fc191499498e0bb88944b34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:20.066090   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.key ...
	I1109 13:29:20.066100   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.key: {Name:mk172cda03059e7d89d250b1ec8c6cc1f7d6eba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:20.066258   10696 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:29:20.066289   10696 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 13:29:20.066312   10696 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:29:20.066332   10696 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 13:29:20.066920   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:29:20.083722   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 13:29:20.099141   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:29:20.114190   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:29:20.128873   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 13:29:20.143941   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:29:20.159152   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:29:20.174141   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:29:20.189274   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:29:20.206059   10696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:29:20.216988   10696 ssh_runner.go:195] Run: openssl version
	I1109 13:29:20.222599   10696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:29:20.236118   10696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:20.239749   10696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:20.239798   10696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:20.274314   10696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:29:20.281994   10696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:29:20.285189   10696 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
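	Note: the apiserver profile cert generated above was requested with SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2; a hedged way to double-check the generated certificate using the host-side path from this run:

	  $ openssl x509 -noout -text -in /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt
	  # check the X509v3 Subject Alternative Name extension against the IPs listed above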
	I1109 13:29:20.285246   10696 kubeadm.go:401] StartCluster: {Name:addons-762402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:20.285321   10696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:29:20.285370   10696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:29:20.309578   10696 cri.go:89] found id: ""
	I1109 13:29:20.309629   10696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:29:20.316378   10696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:29:20.323127   10696 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 13:29:20.323169   10696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:29:20.329783   10696 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 13:29:20.329797   10696 kubeadm.go:158] found existing configuration files:
	
	I1109 13:29:20.329821   10696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 13:29:20.336502   10696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 13:29:20.336545   10696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 13:29:20.342897   10696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 13:29:20.349498   10696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 13:29:20.349542   10696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 13:29:20.355920   10696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 13:29:20.362346   10696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 13:29:20.362389   10696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:29:20.368601   10696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 13:29:20.375028   10696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 13:29:20.375070   10696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 13:29:20.381435   10696 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 13:29:20.413146   10696 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 13:29:20.413194   10696 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 13:29:20.431213   10696 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 13:29:20.431284   10696 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 13:29:20.431350   10696 kubeadm.go:319] OS: Linux
	I1109 13:29:20.431449   10696 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 13:29:20.431527   10696 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 13:29:20.431599   10696 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 13:29:20.431683   10696 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 13:29:20.431753   10696 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 13:29:20.431817   10696 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 13:29:20.431900   10696 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 13:29:20.431980   10696 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 13:29:20.482271   10696 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 13:29:20.482391   10696 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 13:29:20.482526   10696 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 13:29:20.489475   10696 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 13:29:20.491279   10696 out.go:252]   - Generating certificates and keys ...
	I1109 13:29:20.491347   10696 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 13:29:20.491405   10696 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 13:29:20.725867   10696 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 13:29:21.379532   10696 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 13:29:21.689892   10696 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 13:29:21.743695   10696 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 13:29:21.979264   10696 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 13:29:21.979442   10696 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-762402 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 13:29:22.076345   10696 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 13:29:22.076479   10696 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-762402 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 13:29:22.388420   10696 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 13:29:22.751667   10696 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 13:29:22.894049   10696 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 13:29:22.894143   10696 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 13:29:22.926745   10696 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 13:29:23.010543   10696 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 13:29:23.193007   10696 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 13:29:23.516027   10696 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 13:29:23.572292   10696 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 13:29:23.572750   10696 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 13:29:23.576156   10696 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 13:29:23.578164   10696 out.go:252]   - Booting up control plane ...
	I1109 13:29:23.578249   10696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 13:29:23.578317   10696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 13:29:23.579140   10696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 13:29:23.591534   10696 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 13:29:23.591711   10696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 13:29:23.598539   10696 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 13:29:23.598878   10696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 13:29:23.598919   10696 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 13:29:23.690236   10696 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 13:29:23.690367   10696 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 13:29:24.691869   10696 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001678637s
	I1109 13:29:24.694685   10696 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 13:29:24.694803   10696 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1109 13:29:24.694957   10696 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 13:29:24.695088   10696 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 13:29:25.523427   10696 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 828.648538ms
	I1109 13:29:26.544133   10696 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.849463088s
	I1109 13:29:28.195677   10696 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.500970932s
	I1109 13:29:28.206094   10696 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 13:29:28.213352   10696 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 13:29:28.220500   10696 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 13:29:28.220792   10696 kubeadm.go:319] [mark-control-plane] Marking the node addons-762402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 13:29:28.227107   10696 kubeadm.go:319] [bootstrap-token] Using token: yfmz4d.ygaatjqzsyeab290
	I1109 13:29:28.228174   10696 out.go:252]   - Configuring RBAC rules ...
	I1109 13:29:28.228306   10696 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 13:29:28.230729   10696 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 13:29:28.235423   10696 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 13:29:28.237426   10696 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 13:29:28.239438   10696 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 13:29:28.241402   10696 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 13:29:28.600384   10696 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 13:29:29.012474   10696 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 13:29:29.602473   10696 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 13:29:29.603496   10696 kubeadm.go:319] 
	I1109 13:29:29.603582   10696 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 13:29:29.603604   10696 kubeadm.go:319] 
	I1109 13:29:29.603747   10696 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 13:29:29.603763   10696 kubeadm.go:319] 
	I1109 13:29:29.603813   10696 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 13:29:29.603909   10696 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 13:29:29.603991   10696 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 13:29:29.604000   10696 kubeadm.go:319] 
	I1109 13:29:29.604084   10696 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 13:29:29.604094   10696 kubeadm.go:319] 
	I1109 13:29:29.604159   10696 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 13:29:29.604176   10696 kubeadm.go:319] 
	I1109 13:29:29.604252   10696 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 13:29:29.604364   10696 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 13:29:29.604467   10696 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 13:29:29.604479   10696 kubeadm.go:319] 
	I1109 13:29:29.604600   10696 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 13:29:29.604701   10696 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 13:29:29.604708   10696 kubeadm.go:319] 
	I1109 13:29:29.604776   10696 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yfmz4d.ygaatjqzsyeab290 \
	I1109 13:29:29.604867   10696 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 13:29:29.604898   10696 kubeadm.go:319] 	--control-plane 
	I1109 13:29:29.604906   10696 kubeadm.go:319] 
	I1109 13:29:29.605005   10696 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 13:29:29.605017   10696 kubeadm.go:319] 
	I1109 13:29:29.605125   10696 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yfmz4d.ygaatjqzsyeab290 \
	I1109 13:29:29.605248   10696 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 13:29:29.607094   10696 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 13:29:29.607187   10696 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
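	Note: both preflight warnings above look benign for the docker driver in this run: the "configs" kernel module is simply not shipped for the 6.8.0-1043-gcp host kernel, and minikube starts the kubelet directly (see the "systemctl start kubelet" calls earlier) instead of relying on systemd enablement. A hedged check, assuming the node is still running:

	  $ minikube -p addons-762402 ssh -- sudo systemctl is-enabled kubelet   # may report "disabled"; expected here
	  $ minikube -p addons-762402 ssh -- sudo systemctl is-active kubelet    # should report "active"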
	I1109 13:29:29.607205   10696 cni.go:84] Creating CNI manager for ""
	I1109 13:29:29.607212   10696 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:29.608611   10696 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 13:29:29.609687   10696 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 13:29:29.613562   10696 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 13:29:29.613579   10696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 13:29:29.625898   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 13:29:29.811744   10696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:29:29.811833   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:29.811843   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-762402 minikube.k8s.io/updated_at=2025_11_09T13_29_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=addons-762402 minikube.k8s.io/primary=true
	I1109 13:29:29.820434   10696 ops.go:34] apiserver oom_adj: -16
	I1109 13:29:29.890847   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:30.391342   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:30.891650   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:31.390970   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:31.891406   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:32.391556   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:32.891129   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:33.391148   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:33.891714   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:34.391462   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:34.449561   10696 kubeadm.go:1114] duration metric: took 4.637795505s to wait for elevateKubeSystemPrivileges
	I1109 13:29:34.449600   10696 kubeadm.go:403] duration metric: took 14.164359999s to StartCluster
	I1109 13:29:34.449623   10696 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:34.449761   10696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:29:34.450184   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:34.450369   10696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 13:29:34.450404   10696 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:34.450452   10696 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
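	Note: the toEnable map above drives the per-addon setup that follows; which addons actually ended up enabled for this profile can be listed after the fact with a hedged example like:

	  $ minikube -p addons-762402 addons list
	  $ minikube -p addons-762402 addons list --output json   # machine-readable form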
	I1109 13:29:34.450582   10696 addons.go:70] Setting ingress-dns=true in profile "addons-762402"
	I1109 13:29:34.450602   10696 addons.go:70] Setting inspektor-gadget=true in profile "addons-762402"
	I1109 13:29:34.450619   10696 addons.go:239] Setting addon inspektor-gadget=true in "addons-762402"
	I1109 13:29:34.450620   10696 addons.go:239] Setting addon ingress-dns=true in "addons-762402"
	I1109 13:29:34.450631   10696 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:34.450665   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450674   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450692   10696 addons.go:70] Setting ingress=true in profile "addons-762402"
	I1109 13:29:34.450702   10696 addons.go:70] Setting default-storageclass=true in profile "addons-762402"
	I1109 13:29:34.450704   10696 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-762402"
	I1109 13:29:34.450684   10696 addons.go:70] Setting gcp-auth=true in profile "addons-762402"
	I1109 13:29:34.450718   10696 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-762402"
	I1109 13:29:34.450737   10696 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-762402"
	I1109 13:29:34.450744   10696 addons.go:70] Setting registry-creds=true in profile "addons-762402"
	I1109 13:29:34.450751   10696 mustload.go:66] Loading cluster: addons-762402
	I1109 13:29:34.450755   10696 addons.go:239] Setting addon registry-creds=true in "addons-762402"
	I1109 13:29:34.450774   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450803   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450922   10696 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-762402"
	I1109 13:29:34.450977   10696 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-762402"
	I1109 13:29:34.451007   10696 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:34.451047   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451208   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451240   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451254   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451277   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451321   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451326   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451541   10696 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-762402"
	I1109 13:29:34.451561   10696 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-762402"
	I1109 13:29:34.451586   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450718   10696 addons.go:70] Setting cloud-spanner=true in profile "addons-762402"
	I1109 13:29:34.451851   10696 addons.go:239] Setting addon cloud-spanner=true in "addons-762402"
	I1109 13:29:34.451866   10696 addons.go:70] Setting volcano=true in profile "addons-762402"
	I1109 13:29:34.451877   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.451884   10696 addons.go:239] Setting addon volcano=true in "addons-762402"
	I1109 13:29:34.451917   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.452073   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.452351   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.452363   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.452714   10696 addons.go:70] Setting metrics-server=true in profile "addons-762402"
	I1109 13:29:34.452760   10696 addons.go:239] Setting addon metrics-server=true in "addons-762402"
	I1109 13:29:34.452785   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450594   10696 addons.go:70] Setting yakd=true in profile "addons-762402"
	I1109 13:29:34.452848   10696 addons.go:239] Setting addon yakd=true in "addons-762402"
	I1109 13:29:34.452887   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.453233   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.453406   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.453716   10696 addons.go:70] Setting registry=true in profile "addons-762402"
	I1109 13:29:34.453737   10696 addons.go:239] Setting addon registry=true in "addons-762402"
	I1109 13:29:34.453761   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.453818   10696 out.go:179] * Verifying Kubernetes components...
	I1109 13:29:34.450711   10696 addons.go:239] Setting addon ingress=true in "addons-762402"
	I1109 13:29:34.454309   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.454882   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.455683   10696 addons.go:70] Setting storage-provisioner=true in profile "addons-762402"
	I1109 13:29:34.455704   10696 addons.go:239] Setting addon storage-provisioner=true in "addons-762402"
	I1109 13:29:34.455741   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.455915   10696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:34.456550   10696 addons.go:70] Setting volumesnapshots=true in profile "addons-762402"
	I1109 13:29:34.456570   10696 addons.go:239] Setting addon volumesnapshots=true in "addons-762402"
	I1109 13:29:34.456594   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450684   10696 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-762402"
	I1109 13:29:34.457509   10696 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-762402"
	I1109 13:29:34.457542   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.462154   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.462302   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.463135   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.464354   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.510779   10696 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1109 13:29:34.512094   10696 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1109 13:29:34.512373   10696 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:34.512396   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1109 13:29:34.512449   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.513325   10696 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:34.513344   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1109 13:29:34.513392   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.527812   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.530111   10696 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1109 13:29:34.530169   10696 out.go:179]   - Using image docker.io/registry:3.0.0
	I1109 13:29:34.530366   10696 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1109 13:29:34.531937   10696 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:34.531956   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1109 13:29:34.532004   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.532145   10696 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:34.532159   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1109 13:29:34.532221   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.534004   10696 addons.go:239] Setting addon default-storageclass=true in "addons-762402"
	I1109 13:29:34.534073   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.534697   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.536577   10696 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1109 13:29:34.537886   10696 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1109 13:29:34.537947   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1109 13:29:34.538042   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.546841   10696 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1109 13:29:34.546920   10696 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1109 13:29:34.547792   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1109 13:29:34.547806   10696 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1109 13:29:34.547864   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.551621   10696 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 13:29:34.551653   10696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 13:29:34.551711   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.561584   10696 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1109 13:29:34.562861   10696 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:34.562881   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1109 13:29:34.562931   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	W1109 13:29:34.567458   10696 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1109 13:29:34.567654   10696 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:34.572784   10696 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:34.574205   10696 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1109 13:29:34.575585   10696 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-762402"
	I1109 13:29:34.575631   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.576068   10696 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:34.579707   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1109 13:29:34.579774   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.580165   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.584241   10696 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1109 13:29:34.584306   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1109 13:29:34.584241   10696 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 13:29:34.585397   10696 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:34.585413   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1109 13:29:34.585472   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.586232   10696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:34.586247   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:29:34.586292   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.590615   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1109 13:29:34.591917   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1109 13:29:34.594219   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.597797   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1109 13:29:34.601673   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1109 13:29:34.602349   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1109 13:29:34.603723   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1109 13:29:34.604747   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1109 13:29:34.605753   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1109 13:29:34.605833   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1109 13:29:34.605860   10696 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1109 13:29:34.605920   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.606733   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1109 13:29:34.606755   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1109 13:29:34.606817   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.606873   10696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 13:29:34.610711   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.613483   10696 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:34.613502   10696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:29:34.613553   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.615030   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.620964   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.626378   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.626823   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.628771   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.630211   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.642430   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.642597   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.660775   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.663373   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.667693   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	W1109 13:29:34.671251   10696 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:29:34.671335   10696 retry.go:31] will retry after 303.365831ms: ssh: handshake failed: EOF
	I1109 13:29:34.672781   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.676130   10696 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1109 13:29:34.677851   10696 out.go:179]   - Using image docker.io/busybox:stable
	I1109 13:29:34.678994   10696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:34.679050   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1109 13:29:34.679165   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.683216   10696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:34.712660   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.778145   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:34.797130   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:34.800493   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1109 13:29:34.800517   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1109 13:29:34.803596   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:34.807937   10696 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1109 13:29:34.807965   10696 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1109 13:29:34.815500   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1109 13:29:34.815517   10696 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1109 13:29:34.828206   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:34.835095   10696 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 13:29:34.835163   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1109 13:29:34.839882   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1109 13:29:34.839898   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1109 13:29:34.841199   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:34.845688   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:34.851741   10696 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:34.851759   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:34.851761   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1109 13:29:34.852888   10696 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1109 13:29:34.852941   10696 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1109 13:29:34.858452   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:34.858811   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1109 13:29:34.858828   10696 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1109 13:29:34.867836   10696 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 13:29:34.867865   10696 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 13:29:34.883531   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:34.906908   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1109 13:29:34.906932   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1109 13:29:34.917610   10696 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1109 13:29:34.917648   10696 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1109 13:29:34.926727   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:34.932575   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1109 13:29:34.932679   10696 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1109 13:29:34.947569   10696 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:34.947601   10696 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 13:29:34.972210   10696 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1109 13:29:34.972242   10696 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1109 13:29:34.986121   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:34.992400   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1109 13:29:34.992430   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1109 13:29:35.014539   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:35.014582   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1109 13:29:35.037442   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1109 13:29:35.037467   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1109 13:29:35.045193   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1109 13:29:35.045218   10696 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1109 13:29:35.082980   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:35.099035   10696 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:35.099066   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1109 13:29:35.107510   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1109 13:29:35.107530   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1109 13:29:35.140857   10696 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1109 13:29:35.142724   10696 node_ready.go:35] waiting up to 6m0s for node "addons-762402" to be "Ready" ...
	I1109 13:29:35.173219   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1109 13:29:35.173249   10696 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1109 13:29:35.218303   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:35.230346   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1109 13:29:35.230434   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1109 13:29:35.231355   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:35.288613   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1109 13:29:35.288636   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1109 13:29:35.325805   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:35.325830   10696 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1109 13:29:35.383887   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:35.651250   10696 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-762402" context rescaled to 1 replicas
	I1109 13:29:35.786915   10696 addons.go:480] Verifying addon registry=true in "addons-762402"
	I1109 13:29:35.787183   10696 addons.go:480] Verifying addon metrics-server=true in "addons-762402"
	I1109 13:29:35.788554   10696 out.go:179] * Verifying registry addon...
	I1109 13:29:35.788620   10696 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-762402 service yakd-dashboard -n yakd-dashboard
	
	I1109 13:29:35.790675   10696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1109 13:29:35.794008   10696 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:29:35.794075   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
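[Editor's note] The kapi.go:75/86/96 lines above (and the many that follow) are minikube polling for a pod that matches a label selector until it leaves Pending. As a rough illustration only, and not minikube's actual kapi.go code, the sketch below shows the same pattern with standard client-go calls; the kubeconfig path and the "kubernetes.io/minikube-addons=registry" selector are taken from this log, while the function name waitForLabeledPod and the interval/timeout values are assumptions made for the example.

// waitlabeled.go: minimal sketch of polling for a labeled pod to reach Running.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod polls until some pod matching selector in ns reports phase Running.
func waitForLabeledPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API hiccups as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return true, nil
				}
			}
			return false, nil // still Pending, as in the log entries above
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabeledPod(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	fmt.Println("registry pod is Running")
}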
	I1109 13:29:36.293729   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:36.395392   10696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.177043555s)
	W1109 13:29:36.395446   10696 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:29:36.395468   10696 retry.go:31] will retry after 290.637821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
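[Editor's note] The failure above is an ordering issue, not a broken manifest: the VolumeSnapshotClass object was applied in the same kubectl batch that creates the snapshot.storage.k8s.io CRDs, so the API server had no mapping for that kind yet. minikube's retry.go backs off and re-applies (the `apply --force` run at 13:29:36.687 below completes successfully about 2.4s later). Purely as an illustration of that retry pattern, and not minikube's actual code, the sketch below re-runs kubectl apply with a short backoff; the function name applyWithRetry, the attempt count, and the backoff step are assumptions, while the kubeconfig and manifest paths come from this log.

// applyretry.go: minimal sketch of retrying kubectl apply until CRDs are established.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs "kubectl apply -f <file>" a few times with a growing
// backoff, so objects whose CRDs were created in the same batch can be applied
// once the API server has registered the new kinds.
func applyWithRetry(kubeconfig, file string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "-f", file)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
		time.Sleep(time.Duration(i+1) * 300 * time.Millisecond)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 3); err != nil {
		fmt.Println(err)
	}
}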
	I1109 13:29:36.395480   10696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.164054616s)
	I1109 13:29:36.395497   10696 addons.go:480] Verifying addon ingress=true in "addons-762402"
	I1109 13:29:36.395782   10696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.011841449s)
	I1109 13:29:36.395814   10696 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-762402"
	I1109 13:29:36.397143   10696 out.go:179] * Verifying ingress addon...
	I1109 13:29:36.397145   10696 out.go:179] * Verifying csi-hostpath-driver addon...
	I1109 13:29:36.399702   10696 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 13:29:36.400486   10696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1109 13:29:36.402400   10696 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 13:29:36.402414   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:36.403690   10696 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:29:36.403708   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:36.687175   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:36.793940   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:36.902220   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:36.902983   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:37.145013   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:37.293464   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:37.402517   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:37.402635   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:37.793107   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:37.902291   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:37.903097   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:38.293297   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:38.402226   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:38.402891   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:38.793381   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:38.902140   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:38.902731   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:39.103250   10696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.416035732s)
	W1109 13:29:39.145127   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:39.293383   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:39.402444   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:39.402943   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:39.793074   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:39.902394   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:39.903174   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:40.293269   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:40.402212   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:40.403019   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:40.792955   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:40.902009   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:40.902762   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:41.293288   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:41.402046   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:41.402852   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:41.644597   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:41.793413   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:41.902712   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:41.902724   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:42.137131   10696 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1109 13:29:42.137189   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:42.154744   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:42.250672   10696 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1109 13:29:42.261996   10696 addons.go:239] Setting addon gcp-auth=true in "addons-762402"
	I1109 13:29:42.262041   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:42.262354   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:42.279234   10696 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1109 13:29:42.279279   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:42.294261   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:42.296058   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:42.384804   10696 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:42.386101   10696 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1109 13:29:42.387067   10696 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1109 13:29:42.387082   10696 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1109 13:29:42.398866   10696 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1109 13:29:42.398882   10696 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1109 13:29:42.402330   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:42.402746   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:42.411222   10696 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:29:42.411235   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1109 13:29:42.422792   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:29:42.700493   10696 addons.go:480] Verifying addon gcp-auth=true in "addons-762402"
	I1109 13:29:42.701980   10696 out.go:179] * Verifying gcp-auth addon...
	I1109 13:29:42.703660   10696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1109 13:29:42.705769   10696 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1109 13:29:42.705789   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:42.793595   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:42.902625   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:42.902760   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:43.205769   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:43.293447   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:43.402736   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:43.402767   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:29:43.645627   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:43.706936   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:43.807390   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:43.908083   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:43.908196   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:44.206064   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:44.292855   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:44.401936   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:44.402715   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:44.706265   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:44.793505   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:44.902618   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:44.902881   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:45.205864   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:45.292587   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:45.402841   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:45.402971   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:45.706125   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:45.793079   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:45.902805   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:45.903111   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:46.145229   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:46.206078   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:46.292861   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:46.402128   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:46.402948   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:46.706063   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:46.793088   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:46.902308   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:46.903033   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:47.206250   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:47.293120   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:47.402374   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:47.403047   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:47.706352   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:47.793716   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:47.901996   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:47.903246   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:48.145524   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:48.206813   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:48.292677   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:48.402757   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:48.402808   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:48.706032   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:48.793002   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:48.902146   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:48.902960   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:49.206390   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:49.293227   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:49.402324   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:49.403266   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:49.706831   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:49.792725   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:49.902104   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:49.902692   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:50.206438   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:50.293367   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:50.402608   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:50.402658   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:29:50.644912   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:50.705934   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:50.792922   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:50.901903   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:50.902907   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:51.206081   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:51.292809   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:51.401672   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:51.402627   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:51.705908   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:51.806404   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:51.907081   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:51.907084   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:52.206236   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:52.293089   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:52.402105   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:52.403035   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:52.645226   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:52.706097   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:52.793116   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:52.902160   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:52.903157   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:53.206114   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:53.293089   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:53.401983   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:53.402983   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:53.706098   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:53.793160   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:53.902760   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:53.903207   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:54.206380   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:54.293279   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:54.402393   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:54.403244   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:54.645665   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:54.706468   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:54.793620   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:54.902937   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:54.903125   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:55.206411   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:55.293293   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:55.402387   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:55.402403   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:55.706467   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:55.793313   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:55.902813   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:55.902866   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:56.205977   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:56.292629   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:56.402914   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:56.402968   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:56.705876   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:56.792604   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:56.902684   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:56.902685   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:29:57.144940   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:57.205813   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:57.292564   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:57.402861   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:57.403002   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:57.706187   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:57.793041   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:57.902275   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:57.902986   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:58.206220   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:58.292974   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:58.402270   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:58.403117   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:58.706135   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:58.793061   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:58.902074   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:58.903080   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:59.145421   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:59.206420   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:59.293156   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:59.402116   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:59.403096   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:59.706303   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:59.793266   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:59.902578   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:59.902600   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:00.205904   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:00.292732   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:00.401610   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:00.402531   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:00.705890   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:00.792838   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:00.902071   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:00.902976   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:01.206192   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:01.293414   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:01.402822   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:01.402959   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:30:01.645106   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:01.706249   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:01.793201   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:01.902161   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:01.903196   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:02.206118   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:02.292906   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:02.401973   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:02.402819   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:02.705978   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:02.793057   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:02.902356   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:02.903041   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:03.206036   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:03.293024   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:03.402223   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:03.402961   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:03.705560   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:03.793910   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:03.902378   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:03.903060   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:30:04.145621   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:04.206564   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:04.293513   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.403053   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:04.403128   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:04.706580   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:04.793713   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.903034   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:04.903057   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:05.206708   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:05.293609   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:05.402934   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:05.403078   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.706148   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:05.793309   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:05.902371   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:05.902462   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:30:06.145735   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:06.206812   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:06.293675   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.403002   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.403003   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:06.706155   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:06.793107   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.902101   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.902914   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.206076   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:07.293089   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:07.402200   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.403120   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.705832   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:07.792683   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:07.902848   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.902939   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:08.205913   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:08.292865   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.401913   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.402740   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:30:08.645106   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:08.706052   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:08.792899   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.902442   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.903135   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.206354   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.293363   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.402622   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:09.402622   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.706508   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.793590   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.902658   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.902671   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.205564   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.293405   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.402451   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:10.402707   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:30:10.645971   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:10.705972   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.793146   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.902288   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.903205   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.206319   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.293179   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.402211   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.403043   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.706097   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.793061   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.902053   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.902973   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.206300   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.293324   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.402415   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.402493   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.706554   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.793429   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.902608   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.902716   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:30:13.145843   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:13.205929   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.292591   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.402745   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.402791   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:13.706796   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.792669   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.902710   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.902746   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.206558   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.293567   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.403041   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:14.403047   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.706178   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.793300   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.902895   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.903043   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.205878   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.292576   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.402512   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.402611   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:30:15.645815   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:15.705933   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.793049   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.902481   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:15.903424   10696 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:15.903444   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.146132   10696 node_ready.go:49] node "addons-762402" is "Ready"
	I1109 13:30:16.146166   10696 node_ready.go:38] duration metric: took 41.003417549s for node "addons-762402" to be "Ready" ...
	I1109 13:30:16.146182   10696 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:30:16.146236   10696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:30:16.165826   10696 api_server.go:72] duration metric: took 41.715389771s to wait for apiserver process to appear ...
	I1109 13:30:16.165854   10696 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:30:16.165877   10696 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 13:30:16.170981   10696 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 13:30:16.172162   10696 api_server.go:141] control plane version: v1.34.1
	I1109 13:30:16.172191   10696 api_server.go:131] duration metric: took 6.329717ms to wait for apiserver health ...
	I1109 13:30:16.172202   10696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:30:16.180912   10696 system_pods.go:59] 20 kube-system pods found
	I1109 13:30:16.180950   10696 system_pods.go:61] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:16.180961   10696 system_pods.go:61] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:16.180972   10696 system_pods.go:61] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:16.180981   10696 system_pods.go:61] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:16.180989   10696 system_pods.go:61] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:16.180996   10696 system_pods.go:61] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:16.181002   10696 system_pods.go:61] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:16.181006   10696 system_pods.go:61] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:16.181011   10696 system_pods.go:61] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:16.181020   10696 system_pods.go:61] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:16.181025   10696 system_pods.go:61] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:16.181030   10696 system_pods.go:61] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:16.181036   10696 system_pods.go:61] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:16.181045   10696 system_pods.go:61] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:16.181053   10696 system_pods.go:61] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:16.181060   10696 system_pods.go:61] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:16.181074   10696 system_pods.go:61] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:16.181083   10696 system_pods.go:61] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.181091   10696 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.181098   10696 system_pods.go:61] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:16.181106   10696 system_pods.go:74] duration metric: took 8.897082ms to wait for pod list to return data ...
	I1109 13:30:16.181114   10696 default_sa.go:34] waiting for default service account to be created ...
	I1109 13:30:16.185371   10696 default_sa.go:45] found service account: "default"
	I1109 13:30:16.185391   10696 default_sa.go:55] duration metric: took 4.270596ms for default service account to be created ...
	I1109 13:30:16.185401   10696 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 13:30:16.281072   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.282919   10696 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:16.282986   10696 system_pods.go:89] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:16.283010   10696 system_pods.go:89] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:16.283029   10696 system_pods.go:89] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:16.283049   10696 system_pods.go:89] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:16.283068   10696 system_pods.go:89] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:16.283077   10696 system_pods.go:89] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:16.283084   10696 system_pods.go:89] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:16.283090   10696 system_pods.go:89] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:16.283095   10696 system_pods.go:89] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:16.283105   10696 system_pods.go:89] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:16.283110   10696 system_pods.go:89] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:16.283118   10696 system_pods.go:89] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:16.283126   10696 system_pods.go:89] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:16.283135   10696 system_pods.go:89] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:16.283143   10696 system_pods.go:89] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:16.283152   10696 system_pods.go:89] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:16.283159   10696 system_pods.go:89] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:16.283167   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.283175   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.283184   10696 system_pods.go:89] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:16.283203   10696 retry.go:31] will retry after 207.228037ms: missing components: kube-dns
	I1109 13:30:16.379595   10696 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:30:16.379617   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.403046   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.403169   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.499868   10696 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:16.499915   10696 system_pods.go:89] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:16.499925   10696 system_pods.go:89] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:16.499934   10696 system_pods.go:89] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:16.499942   10696 system_pods.go:89] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:16.499950   10696 system_pods.go:89] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:16.499958   10696 system_pods.go:89] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:16.499964   10696 system_pods.go:89] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:16.499970   10696 system_pods.go:89] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:16.499975   10696 system_pods.go:89] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:16.499984   10696 system_pods.go:89] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:16.499989   10696 system_pods.go:89] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:16.499995   10696 system_pods.go:89] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:16.500002   10696 system_pods.go:89] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:16.500011   10696 system_pods.go:89] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:16.500021   10696 system_pods.go:89] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:16.500028   10696 system_pods.go:89] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:16.500048   10696 system_pods.go:89] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:16.500057   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.500066   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.500073   10696 system_pods.go:89] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:16.500089   10696 retry.go:31] will retry after 251.088942ms: missing components: kube-dns
	I1109 13:30:16.707410   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.755591   10696 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:16.755629   10696 system_pods.go:89] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:16.755657   10696 system_pods.go:89] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:16.755668   10696 system_pods.go:89] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:16.755678   10696 system_pods.go:89] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:16.755688   10696 system_pods.go:89] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:16.755694   10696 system_pods.go:89] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:16.755701   10696 system_pods.go:89] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:16.755706   10696 system_pods.go:89] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:16.755712   10696 system_pods.go:89] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:16.755725   10696 system_pods.go:89] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:16.755731   10696 system_pods.go:89] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:16.755736   10696 system_pods.go:89] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:16.755744   10696 system_pods.go:89] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:16.755754   10696 system_pods.go:89] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:16.755762   10696 system_pods.go:89] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:16.755774   10696 system_pods.go:89] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:16.755782   10696 system_pods.go:89] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:16.755795   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.755806   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.755814   10696 system_pods.go:89] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:16.755832   10696 retry.go:31] will retry after 455.996461ms: missing components: kube-dns
	I1109 13:30:16.793452   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.903352   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.903413   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.207298   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.215391   10696 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:17.215422   10696 system_pods.go:89] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:17.215429   10696 system_pods.go:89] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Running
	I1109 13:30:17.215436   10696 system_pods.go:89] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:17.215441   10696 system_pods.go:89] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:17.215447   10696 system_pods.go:89] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:17.215451   10696 system_pods.go:89] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:17.215455   10696 system_pods.go:89] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:17.215459   10696 system_pods.go:89] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:17.215462   10696 system_pods.go:89] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:17.215466   10696 system_pods.go:89] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:17.215471   10696 system_pods.go:89] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:17.215475   10696 system_pods.go:89] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:17.215480   10696 system_pods.go:89] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:17.215487   10696 system_pods.go:89] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:17.215492   10696 system_pods.go:89] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:17.215497   10696 system_pods.go:89] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:17.215504   10696 system_pods.go:89] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:17.215509   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:17.215516   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:17.215522   10696 system_pods.go:89] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Running
	I1109 13:30:17.215529   10696 system_pods.go:126] duration metric: took 1.030122205s to wait for k8s-apps to be running ...
	I1109 13:30:17.215536   10696 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:30:17.215573   10696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:30:17.227825   10696 system_svc.go:56] duration metric: took 12.281992ms WaitForService to wait for kubelet
	I1109 13:30:17.227851   10696 kubeadm.go:587] duration metric: took 42.777420022s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:30:17.227872   10696 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:30:17.230044   10696 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 13:30:17.230078   10696 node_conditions.go:123] node cpu capacity is 8
	I1109 13:30:17.230092   10696 node_conditions.go:105] duration metric: took 2.210112ms to run NodePressure ...
	I1109 13:30:17.230109   10696 start.go:242] waiting for startup goroutines ...
	I1109 13:30:17.308431   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.402844   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.402920   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.707311   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.793410   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.903416   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.903477   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.207855   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.293709   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.403305   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.403401   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.706547   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.793763   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.903061   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.903131   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:19.208710   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.294741   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:19.403548   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:19.404980   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:19.707089   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.806814   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:19.902355   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:19.903067   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.206856   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.293351   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.403668   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.403738   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.707335   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.794301   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.903158   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.903227   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.207154   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.293972   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.402981   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.403842   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.706619   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.794607   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.903613   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.903890   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.206484   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.294288   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.403748   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.403777   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.707310   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.794244   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.902937   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.903030   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.206323   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.293408   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.403617   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.403674   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.706766   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.793107   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.903100   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.903867   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.206953   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.294023   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.405980   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.407049   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.706992   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.793848   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.903560   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.903655   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.207513   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.294116   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.403251   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.403731   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.707401   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.793964   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.902668   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.903296   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.206788   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.292703   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.403274   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.403348   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.706552   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.795075   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.903040   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.903492   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.207865   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.294087   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.403088   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:27.406245   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.706421   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.794244   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.903042   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.903155   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.207360   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.307539   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.531400   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.531598   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:28.706813   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.792845   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.903167   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.903315   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.206921   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.293340   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.404107   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.404594   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.706379   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.793866   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.902393   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.903254   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.210588   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.294351   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.403252   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.403446   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.707015   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.793198   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.903049   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.903714   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.207341   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.293996   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.402775   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.403432   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.706894   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.807272   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.907677   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.907701   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.206323   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.293946   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.402702   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.403173   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.706749   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.794230   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.903195   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.903214   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.207181   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.293563   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.403296   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.403294   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.707101   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.793516   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.902708   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.902857   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.206899   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.307043   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.407940   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.408062   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.706327   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.793376   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.903015   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.903028   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.207082   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.293705   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.403716   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.403926   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.707989   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.793874   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.902729   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.903272   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.207188   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.293831   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.403353   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.403430   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.707219   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.793811   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.903568   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.903623   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.207362   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.293381   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.402547   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.402733   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.706975   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.793358   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.903229   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.903315   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.207323   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.308162   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.402382   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.403254   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.707593   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.794311   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.903483   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.903521   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.207320   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.293275   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.402945   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.403082   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.706497   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.793999   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.902544   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.903355   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.207490   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.294151   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.403546   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.403718   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.710422   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.794453   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.906173   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.906536   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.281050   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.294242   10696 kapi.go:107] duration metric: took 1m5.503565075s to wait for kubernetes.io/minikube-addons=registry ...
	I1109 13:30:41.402844   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.402934   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.754949   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.902978   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.903778   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.206912   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.402597   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.403587   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.706800   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.902892   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.903362   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.206827   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.403377   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.403551   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.706719   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.903384   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.903419   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.207297   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.403306   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.403349   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.707547   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.903447   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.903524   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.206955   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.402557   10696 kapi.go:107] duration metric: took 1m9.002850514s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 13:30:45.403940   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.740232   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.904339   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.207090   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.404375   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.846381   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.903812   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.206946   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.404265   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.707418   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.904515   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.206521   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.403197   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.707633   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.904414   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.207025   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.403673   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.707232   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.903889   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.207112   10696 kapi.go:107] duration metric: took 1m7.503446673s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1109 13:30:50.208544   10696 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-762402 cluster.
	I1109 13:30:50.209744   10696 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1109 13:30:50.210997   10696 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1109 13:30:50.404610   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.903403   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.404066   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.904063   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.403267   10696 kapi.go:107] duration metric: took 1m16.002779738s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1109 13:30:52.404684   10696 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, registry-creds, amd-gpu-device-plugin, default-storageclass, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1109 13:30:52.405663   10696 addons.go:515] duration metric: took 1m17.955207788s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner registry-creds amd-gpu-device-plugin default-storageclass inspektor-gadget nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1109 13:30:52.405710   10696 start.go:247] waiting for cluster config update ...
	I1109 13:30:52.405735   10696 start.go:256] writing updated cluster config ...
	I1109 13:30:52.405999   10696 ssh_runner.go:195] Run: rm -f paused
	I1109 13:30:52.409879   10696 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:30:52.412342   10696 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lqlkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.415631   10696 pod_ready.go:94] pod "coredns-66bc5c9577-lqlkm" is "Ready"
	I1109 13:30:52.415658   10696 pod_ready.go:86] duration metric: took 3.29574ms for pod "coredns-66bc5c9577-lqlkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.417162   10696 pod_ready.go:83] waiting for pod "etcd-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.420179   10696 pod_ready.go:94] pod "etcd-addons-762402" is "Ready"
	I1109 13:30:52.420195   10696 pod_ready.go:86] duration metric: took 3.015876ms for pod "etcd-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.421761   10696 pod_ready.go:83] waiting for pod "kube-apiserver-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.424705   10696 pod_ready.go:94] pod "kube-apiserver-addons-762402" is "Ready"
	I1109 13:30:52.424720   10696 pod_ready.go:86] duration metric: took 2.944011ms for pod "kube-apiserver-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.426125   10696 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.812725   10696 pod_ready.go:94] pod "kube-controller-manager-addons-762402" is "Ready"
	I1109 13:30:52.812752   10696 pod_ready.go:86] duration metric: took 386.612063ms for pod "kube-controller-manager-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:53.013486   10696 pod_ready.go:83] waiting for pod "kube-proxy-8b626" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:53.413156   10696 pod_ready.go:94] pod "kube-proxy-8b626" is "Ready"
	I1109 13:30:53.413183   10696 pod_ready.go:86] duration metric: took 399.668469ms for pod "kube-proxy-8b626" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:53.613742   10696 pod_ready.go:83] waiting for pod "kube-scheduler-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:54.013593   10696 pod_ready.go:94] pod "kube-scheduler-addons-762402" is "Ready"
	I1109 13:30:54.013620   10696 pod_ready.go:86] duration metric: took 399.854464ms for pod "kube-scheduler-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:54.013636   10696 pod_ready.go:40] duration metric: took 1.603734073s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:30:54.056474   10696 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 13:30:54.058246   10696 out.go:179] * Done! kubectl is now configured to use "addons-762402" cluster and "default" namespace by default
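
	The repeated "waiting for pod ..." lines above come from minikube polling each addon's pods by label selector until they report Ready, then logging a duration metric. As a rough sketch only (this is not minikube's actual kapi.go/pod_ready.go code; the helper names, poll interval, and timeout are invented for illustration), a client-go loop of that shape could look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	// waitForLabel polls pods matching selector in ns until one is Ready or the timeout expires.
	func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && isReady(&pods.Items[0]) {
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 4*time.Minute); err != nil {
			panic(err)
		}
	}

	Once every selector reports Ready, minikube prints the per-addon duration metrics seen above and moves on to the extra kube-system pod checks.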
	
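	The gcp-auth message above also explains the opt-out: a pod that carries a label with the `gcp-auth-skip-secret` key is not mutated to mount credentials. Purely as a hypothetical illustration (the object name, namespace, and image are placeholders, and the label value is arbitrary since the message only mentions the key), such a pod object could be built and dumped like this:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds",
				Namespace: "default",
				// Presence of this label key is the opt-out described in the message above.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "docker.io/kicbase/echo-server:1.0"},
				},
			},
		}
		// Print the equivalent manifest so it can be inspected or applied with kubectl.
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}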
	
	==> CRI-O <==
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.840130155Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-7bqrw/POD" id=081be967-db18-445c-8b52-037152bad1e4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.840234348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.846373406Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-7bqrw Namespace:default ID:b207ef04d9cc30aab17b85a1697a186075170abe5cbf4c053c866f0a0bff09c1 UID:d44bf62d-b9a8-40b0-ab69-a238a91d3b31 NetNS:/var/run/netns/6d077224-0edd-4619-9f25-4de57a783374 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007b8c40}] Aliases:map[]}"
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.846408912Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-7bqrw to CNI network \"kindnet\" (type=ptp)"
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.857996656Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-7bqrw Namespace:default ID:b207ef04d9cc30aab17b85a1697a186075170abe5cbf4c053c866f0a0bff09c1 UID:d44bf62d-b9a8-40b0-ab69-a238a91d3b31 NetNS:/var/run/netns/6d077224-0edd-4619-9f25-4de57a783374 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007b8c40}] Aliases:map[]}"
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.858110533Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-7bqrw for CNI network kindnet (type=ptp)"
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.8589016Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.85962265Z" level=info msg="Ran pod sandbox b207ef04d9cc30aab17b85a1697a186075170abe5cbf4c053c866f0a0bff09c1 with infra container: default/hello-world-app-5d498dc89-7bqrw/POD" id=081be967-db18-445c-8b52-037152bad1e4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.8606233Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4c2b5aef-c551-43f7-99e1-63528a8e964d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.860762829Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=4c2b5aef-c551-43f7-99e1-63528a8e964d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.860794087Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=4c2b5aef-c551-43f7-99e1-63528a8e964d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.861318146Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=55ded381-9051-439f-bd97-17163529bd15 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:33:34 addons-762402 crio[772]: time="2025-11-09T13:33:34.865543905Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.644211496Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=55ded381-9051-439f-bd97-17163529bd15 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.644796086Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f488bc58-013a-49b3-90c7-62011c79ca4a name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.646371183Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1e2e26ba-e8bd-4faa-a46e-216a532a4217 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.649909429Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-7bqrw/hello-world-app" id=a3a63469-6243-47a4-b00f-cbb1a174890b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.650039523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.656074056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.656258046Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/de0bf66a49fb33b4f28bef24befb50fff10f218de610e5314d089c5e62d3bcc3/merged/etc/passwd: no such file or directory"
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.656290375Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/de0bf66a49fb33b4f28bef24befb50fff10f218de610e5314d089c5e62d3bcc3/merged/etc/group: no such file or directory"
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.656585345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.692847195Z" level=info msg="Created container 3c8cabac68013e1b315750a72c2ed9b5a9bccadaed76e89fd33d4eeffbb2b4b4: default/hello-world-app-5d498dc89-7bqrw/hello-world-app" id=a3a63469-6243-47a4-b00f-cbb1a174890b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.693374069Z" level=info msg="Starting container: 3c8cabac68013e1b315750a72c2ed9b5a9bccadaed76e89fd33d4eeffbb2b4b4" id=b2fc22ef-e25c-4a1f-8b9a-b21ab090b74e name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 13:33:35 addons-762402 crio[772]: time="2025-11-09T13:33:35.695056073Z" level=info msg="Started container" PID=9743 containerID=3c8cabac68013e1b315750a72c2ed9b5a9bccadaed76e89fd33d4eeffbb2b4b4 description=default/hello-world-app-5d498dc89-7bqrw/hello-world-app id=b2fc22ef-e25c-4a1f-8b9a-b21ab090b74e name=/runtime.v1.RuntimeService/StartContainer sandboxID=b207ef04d9cc30aab17b85a1697a186075170abe5cbf4c053c866f0a0bff09c1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	3c8cabac68013       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   b207ef04d9cc3       hello-world-app-5d498dc89-7bqrw             default
	d7738655acb84       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   c996c55c35f2c       registry-creds-764b6fb674-2gshl             kube-system
	e7ace7c7f819f       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   db8f2baec57cc       nginx                                       default
	011e3c5303bab       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   f9369e0852b64       busybox                                     default
	af27443b9f896       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	faa104590cba3       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	4e995ec9dee84       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	8500f930e3f9e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   29df20bcc4c92       gcp-auth-78565c9fb4-6bbn8                   gcp-auth
	b930aa6a12030       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	3c81403e30d89       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   83e5aa4c4fcda       gadget-d5mhg                                gadget
	d67da63b5ee91       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	96aed698532bb       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             2 minutes ago            Running             controller                               0                   4c56016694fb0       ingress-nginx-controller-675c5ddd98-6jkpc   ingress-nginx
	0aaed23bb5d29       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   99b3c34b1d144       registry-proxy-z7stg                        kube-system
	08f89d0732d38       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	2afddde1486e2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   3bfdd366bfc78       amd-gpu-device-plugin-8nlkf                 kube-system
	c2e8f6876246e       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   8b86bcd173c24       nvidia-device-plugin-daemonset-rrlcz        kube-system
	c947eeaaf49bf       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   978af21e1ae4d       snapshot-controller-7d9fbc56b8-jcz8h        kube-system
	3c9fabcff63aa       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   b9622b93a6fef       csi-hostpath-resizer-0                      kube-system
	6325296d296d9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   0f04a4fdedfa5       ingress-nginx-admission-patch-f6fqd         ingress-nginx
	057fd0d666013       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   2bdc8b188c8d3       csi-hostpath-attacher-0                     kube-system
	5e49dd8732922       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   c0311eb2628f9       snapshot-controller-7d9fbc56b8-f24q2        kube-system
	2a8876a52c7ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   8f27ad33070ce       ingress-nginx-admission-create-l24wm        ingress-nginx
	403c426d75120       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   38aeac4681dc2       registry-6b586f9694-xvmzk                   kube-system
	9bf784651f15b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   0b2b2a6278449       yakd-dashboard-5ff678cb9-6fdjm              yakd-dashboard
	72e148959a3f5       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago            Running             cloud-spanner-emulator                   0                   892af68365186       cloud-spanner-emulator-6f9fcf858b-bs44j     default
	8c332e600a86f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   f8946f6cd2082       kube-ingress-dns-minikube                   kube-system
	6d09ceddae1c8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   1896c514f17fd       local-path-provisioner-648f6765c9-xqxbg     local-path-storage
	e93137eb9f506       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   4253ff216667f       metrics-server-85b7d694d7-992g6             kube-system
	befdac5dae601       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   716919a1cc029       storage-provisioner                         kube-system
	62effce4d4405       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   4ae95db059286       coredns-66bc5c9577-lqlkm                    kube-system
	79692b5ce1377       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   cae308b8f8421       kube-proxy-8b626                            kube-system
	5af868c65929f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   5a66c977c864e       kindnet-qcnps                               kube-system
	5bb7efe058cec       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   26fc1815b1cc4       kube-apiserver-addons-762402                kube-system
	861090a7ec881       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   690bac2e18b57       kube-controller-manager-addons-762402       kube-system
	790087032ffe1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   353dede2d9cfd       kube-scheduler-addons-762402                kube-system
	09ed3ea084064       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   4c4c0e0ebbaec       etcd-addons-762402                          kube-system
	
	
	==> coredns [62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e] <==
	[INFO] 10.244.0.22:42651 - 48373 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004433874s
	[INFO] 10.244.0.22:60321 - 9920 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004138559s
	[INFO] 10.244.0.22:34186 - 36428 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004200207s
	[INFO] 10.244.0.22:53566 - 18775 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00376843s
	[INFO] 10.244.0.22:50242 - 34733 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00536655s
	[INFO] 10.244.0.22:55551 - 40005 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000792326s
	[INFO] 10.244.0.22:38405 - 43085 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002139968s
	[INFO] 10.244.0.28:48201 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000203553s
	[INFO] 10.244.0.28:36527 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014297s
	[INFO] 10.244.0.29:57425 - 27196 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000211809s
	[INFO] 10.244.0.29:51919 - 43491 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000306182s
	[INFO] 10.244.0.29:42375 - 27763 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000167411s
	[INFO] 10.244.0.29:44027 - 812 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000229011s
	[INFO] 10.244.0.29:34684 - 42075 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00009296s
	[INFO] 10.244.0.29:59386 - 13266 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000118394s
	[INFO] 10.244.0.29:46548 - 6959 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004923629s
	[INFO] 10.244.0.29:36322 - 53638 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.005276309s
	[INFO] 10.244.0.29:49659 - 31070 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005213625s
	[INFO] 10.244.0.29:56650 - 59968 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006223611s
	[INFO] 10.244.0.29:47976 - 5112 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005384403s
	[INFO] 10.244.0.29:48201 - 24666 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005380938s
	[INFO] 10.244.0.29:57063 - 33460 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004323213s
	[INFO] 10.244.0.29:51938 - 47589 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004581461s
	[INFO] 10.244.0.29:50847 - 27559 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001615628s
	[INFO] 10.244.0.29:49208 - 42243 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001638587s
	
	
	==> describe nodes <==
	Name:               addons-762402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-762402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=addons-762402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_29_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-762402
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-762402"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:29:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-762402
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:33:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:32:42 +0000   Sun, 09 Nov 2025 13:29:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:32:42 +0000   Sun, 09 Nov 2025 13:29:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:32:42 +0000   Sun, 09 Nov 2025 13:29:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:32:42 +0000   Sun, 09 Nov 2025 13:30:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-762402
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fefadf4c-cb63-48e2-9144-41b567f755ed
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     cloud-spanner-emulator-6f9fcf858b-bs44j      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  default                     hello-world-app-5d498dc89-7bqrw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-d5mhg                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  gcp-auth                    gcp-auth-78565c9fb4-6bbn8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-6jkpc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m
	  kube-system                 amd-gpu-device-plugin-8nlkf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 coredns-66bc5c9577-lqlkm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m2s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 csi-hostpathplugin-77pp6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 etcd-addons-762402                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m8s
	  kube-system                 kindnet-qcnps                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-addons-762402                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-762402        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-8b626                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-addons-762402                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 metrics-server-85b7d694d7-992g6              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m1s
	  kube-system                 nvidia-device-plugin-daemonset-rrlcz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 registry-6b586f9694-xvmzk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-creds-764b6fb674-2gshl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-proxy-z7stg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-f24q2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 snapshot-controller-7d9fbc56b8-jcz8h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  local-path-storage          local-path-provisioner-648f6765c9-xqxbg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-6fdjm               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m     kube-proxy       
	  Normal  Starting                 4m8s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s   kubelet          Node addons-762402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s   kubelet          Node addons-762402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s   kubelet          Node addons-762402 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m3s   node-controller  Node addons-762402 event: Registered Node addons-762402 in Controller
	  Normal  NodeReady                3m21s  kubelet          Node addons-762402 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b] <==
	{"level":"warn","ts":"2025-11-09T13:29:26.053280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.058883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.064692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.071485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.077031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.083034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.088493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.105041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.110723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.116935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.164808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:36.866809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:36.873570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:03.541946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:03.561435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:03.567191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40432","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:30:28.529974Z","caller":"traceutil/trace.go:172","msg":"trace[175997618] linearizableReadLoop","detail":"{readStateIndex:1020; appliedIndex:1020; }","duration":"128.030434ms","start":"2025-11-09T13:30:28.401926Z","end":"2025-11-09T13:30:28.529956Z","steps":["trace[175997618] 'read index received'  (duration: 128.023461ms)","trace[175997618] 'applied index is now lower than readState.Index'  (duration: 6.023µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:30:28.530032Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.080167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:30:28.530101Z","caller":"traceutil/trace.go:172","msg":"trace[349352788] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:990; }","duration":"128.168336ms","start":"2025-11-09T13:30:28.401922Z","end":"2025-11-09T13:30:28.530090Z","steps":["trace[349352788] 'agreement among raft nodes before linearized reading'  (duration: 128.042868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:30:28.530111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.171568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:30:28.530113Z","caller":"traceutil/trace.go:172","msg":"trace[1618212384] transaction","detail":"{read_only:false; response_revision:991; number_of_response:1; }","duration":"139.418953ms","start":"2025-11-09T13:30:28.390675Z","end":"2025-11-09T13:30:28.530094Z","steps":["trace[1618212384] 'process raft request'  (duration: 139.308424ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:30:28.530147Z","caller":"traceutil/trace.go:172","msg":"trace[1625001800] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:991; }","duration":"128.217404ms","start":"2025-11-09T13:30:28.401922Z","end":"2025-11-09T13:30:28.530139Z","steps":["trace[1625001800] 'agreement among raft nodes before linearized reading'  (duration: 128.136663ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:30:41.627101Z","caller":"traceutil/trace.go:172","msg":"trace[1311191680] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"105.220976ms","start":"2025-11-09T13:30:41.521866Z","end":"2025-11-09T13:30:41.627087Z","steps":["trace[1311191680] 'process raft request'  (duration: 105.118212ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:30:46.844726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.887389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:30:46.844807Z","caller":"traceutil/trace.go:172","msg":"trace[2034938259] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"138.980773ms","start":"2025-11-09T13:30:46.705810Z","end":"2025-11-09T13:30:46.844791Z","steps":["trace[2034938259] 'range keys from in-memory index tree'  (duration: 138.826943ms)"],"step_count":1}
	
	
	==> gcp-auth [8500f930e3f9ec473752bbd1560e45502716064b6e945d23ffb9fb4c8afffd3a] <==
	2025/11/09 13:30:49 GCP Auth Webhook started!
	2025/11/09 13:30:54 Ready to marshal response ...
	2025/11/09 13:30:54 Ready to write response ...
	2025/11/09 13:30:54 Ready to marshal response ...
	2025/11/09 13:30:54 Ready to write response ...
	2025/11/09 13:30:54 Ready to marshal response ...
	2025/11/09 13:30:54 Ready to write response ...
	2025/11/09 13:31:04 Ready to marshal response ...
	2025/11/09 13:31:04 Ready to write response ...
	2025/11/09 13:31:04 Ready to marshal response ...
	2025/11/09 13:31:04 Ready to write response ...
	2025/11/09 13:31:08 Ready to marshal response ...
	2025/11/09 13:31:08 Ready to write response ...
	2025/11/09 13:31:12 Ready to marshal response ...
	2025/11/09 13:31:12 Ready to write response ...
	2025/11/09 13:31:13 Ready to marshal response ...
	2025/11/09 13:31:13 Ready to write response ...
	2025/11/09 13:31:26 Ready to marshal response ...
	2025/11/09 13:31:26 Ready to write response ...
	2025/11/09 13:31:56 Ready to marshal response ...
	2025/11/09 13:31:56 Ready to write response ...
	2025/11/09 13:33:34 Ready to marshal response ...
	2025/11/09 13:33:34 Ready to write response ...
	
	
	==> kernel <==
	 13:33:36 up 16 min,  0 user,  load average: 0.58, 0.84, 0.43
	Linux addons-762402 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7] <==
	I1109 13:31:35.563853       1 main.go:301] handling current node
	I1109 13:31:45.565822       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:31:45.565858       1 main.go:301] handling current node
	I1109 13:31:55.568900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:31:55.568926       1 main.go:301] handling current node
	I1109 13:32:05.564102       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:05.564135       1 main.go:301] handling current node
	I1109 13:32:15.563852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:15.563910       1 main.go:301] handling current node
	I1109 13:32:25.568915       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:25.568945       1 main.go:301] handling current node
	I1109 13:32:35.563901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:35.563925       1 main.go:301] handling current node
	I1109 13:32:45.564671       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:45.564707       1 main.go:301] handling current node
	I1109 13:32:55.568695       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:55.568722       1 main.go:301] handling current node
	I1109 13:33:05.564850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:05.564879       1 main.go:301] handling current node
	I1109 13:33:15.565347       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:15.565383       1 main.go:301] handling current node
	I1109 13:33:25.568426       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:25.568451       1 main.go:301] handling current node
	I1109 13:33:35.563863       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:35.563894       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21] <==
	W1109 13:30:03.567143       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:30:15.827065       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.121.13:443: connect: connection refused
	E1109 13:30:15.827109       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.121.13:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:15.827064       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.121.13:443: connect: connection refused
	E1109 13:30:15.827488       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.121.13:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:15.849088       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.121.13:443: connect: connection refused
	E1109 13:30:15.849130       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.121.13:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:15.849624       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.121.13:443: connect: connection refused
	E1109 13:30:15.849685       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.121.13:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:18.980264       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.68:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:18.980390       1 handler_proxy.go:99] no RequestInfo found in the context
	E1109 13:30:18.980457       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1109 13:30:18.980885       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.68:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:18.985938       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.68:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:19.007081       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.68:443: connect: connection refused" logger="UnhandledError"
	I1109 13:30:19.084129       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1109 13:31:01.676407       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59066: use of closed network connection
	E1109 13:31:01.811287       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59088: use of closed network connection
	I1109 13:31:08.570506       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1109 13:31:08.757610       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.174.119"}
	I1109 13:31:36.849439       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1109 13:33:34.605804       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.225.234"}
	
	
	==> kube-controller-manager [861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da] <==
	I1109 13:29:33.526584       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 13:29:33.526604       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 13:29:33.526680       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 13:29:33.527810       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 13:29:33.527849       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 13:29:33.530010       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:29:33.533168       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:29:33.533194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:29:33.537382       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 13:29:33.542599       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 13:29:33.546837       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 13:29:33.546900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:29:33.551119       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 13:29:33.559396       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:29:33.559407       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 13:29:33.559414       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1109 13:29:35.654664       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1109 13:30:03.536492       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1109 13:30:03.536625       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1109 13:30:03.536686       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1109 13:30:03.552937       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1109 13:30:03.556381       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 13:30:03.637224       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:30:03.657401       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:30:18.477057       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6] <==
	I1109 13:29:35.120379       1 server_linux.go:53] "Using iptables proxy"
	I1109 13:29:35.352824       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:29:35.454295       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:29:35.454330       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 13:29:35.454405       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:29:35.607435       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 13:29:35.607551       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:29:35.616363       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:29:35.625213       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:29:35.625503       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:29:35.631387       1 config.go:200] "Starting service config controller"
	I1109 13:29:35.631452       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:29:35.631477       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:29:35.631482       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:29:35.631489       1 config.go:309] "Starting node config controller"
	I1109 13:29:35.631495       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:29:35.631502       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:29:35.631496       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:29:35.631510       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:29:35.731895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:29:35.732009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:29:35.734897       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138] <==
	E1109 13:29:26.541671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:29:26.542586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:26.542740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 13:29:26.542788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:29:26.542816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:29:26.542930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 13:29:26.542933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:29:26.542987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 13:29:26.542982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:29:26.543012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:29:26.543049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:29:26.543072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 13:29:26.543078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:29:26.543154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:29:26.543255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:29:26.543341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:29:27.446276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:29:27.579577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:27.585474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:29:27.609950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:29:27.675591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 13:29:27.696659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:29:27.723487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:29:27.754544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1109 13:29:28.040830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 13:31:56 addons-762402 kubelet[1292]: I1109 13:31:56.937764    1292 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-3ddc37f0-8ef7-4208-be19-42db74055adc\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7640e8f9-bd70-11f0-a45e-9a610ef250bb\") pod \"task-pv-pod-restore\" (UID: \"9a22c28d-b543-476b-bd00-148dac1b108c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/4b046ab6681be82aae5e28fd39358936fa25990c94065b6d806e0b7920021e1f/globalmount\"" pod="default/task-pv-pod-restore"
	Nov 09 13:32:01 addons-762402 kubelet[1292]: I1109 13:32:01.828319    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z7stg" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:32:04 addons-762402 kubelet[1292]: I1109 13:32:04.834922    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=8.297121509 podStartE2EDuration="8.834902584s" podCreationTimestamp="2025-11-09 13:31:56 +0000 UTC" firstStartedPulling="2025-11-09 13:31:57.103967601 +0000 UTC m=+148.353617286" lastFinishedPulling="2025-11-09 13:31:57.641748682 +0000 UTC m=+148.891398361" observedRunningTime="2025-11-09 13:31:58.419868452 +0000 UTC m=+149.669518145" watchObservedRunningTime="2025-11-09 13:32:04.834902584 +0000 UTC m=+156.084552278"
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.090601    1292 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4w54\" (UniqueName: \"kubernetes.io/projected/9a22c28d-b543-476b-bd00-148dac1b108c-kube-api-access-x4w54\") pod \"9a22c28d-b543-476b-bd00-148dac1b108c\" (UID: \"9a22c28d-b543-476b-bd00-148dac1b108c\") "
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.090747    1292 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7640e8f9-bd70-11f0-a45e-9a610ef250bb\") pod \"9a22c28d-b543-476b-bd00-148dac1b108c\" (UID: \"9a22c28d-b543-476b-bd00-148dac1b108c\") "
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.090781    1292 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9a22c28d-b543-476b-bd00-148dac1b108c-gcp-creds\") pod \"9a22c28d-b543-476b-bd00-148dac1b108c\" (UID: \"9a22c28d-b543-476b-bd00-148dac1b108c\") "
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.090960    1292 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a22c28d-b543-476b-bd00-148dac1b108c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9a22c28d-b543-476b-bd00-148dac1b108c" (UID: "9a22c28d-b543-476b-bd00-148dac1b108c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.092858    1292 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a22c28d-b543-476b-bd00-148dac1b108c-kube-api-access-x4w54" (OuterVolumeSpecName: "kube-api-access-x4w54") pod "9a22c28d-b543-476b-bd00-148dac1b108c" (UID: "9a22c28d-b543-476b-bd00-148dac1b108c"). InnerVolumeSpecName "kube-api-access-x4w54". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.093764    1292 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^7640e8f9-bd70-11f0-a45e-9a610ef250bb" (OuterVolumeSpecName: "task-pv-storage") pod "9a22c28d-b543-476b-bd00-148dac1b108c" (UID: "9a22c28d-b543-476b-bd00-148dac1b108c"). InnerVolumeSpecName "pvc-3ddc37f0-8ef7-4208-be19-42db74055adc". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.191614    1292 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x4w54\" (UniqueName: \"kubernetes.io/projected/9a22c28d-b543-476b-bd00-148dac1b108c-kube-api-access-x4w54\") on node \"addons-762402\" DevicePath \"\""
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.191708    1292 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-3ddc37f0-8ef7-4208-be19-42db74055adc\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7640e8f9-bd70-11f0-a45e-9a610ef250bb\") on node \"addons-762402\" "
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.191726    1292 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9a22c28d-b543-476b-bd00-148dac1b108c-gcp-creds\") on node \"addons-762402\" DevicePath \"\""
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.196148    1292 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-3ddc37f0-8ef7-4208-be19-42db74055adc" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^7640e8f9-bd70-11f0-a45e-9a610ef250bb") on node "addons-762402"
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.292132    1292 reconciler_common.go:299] "Volume detached for volume \"pvc-3ddc37f0-8ef7-4208-be19-42db74055adc\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7640e8f9-bd70-11f0-a45e-9a610ef250bb\") on node \"addons-762402\" DevicePath \"\""
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.436948    1292 scope.go:117] "RemoveContainer" containerID="c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64"
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.446336    1292 scope.go:117] "RemoveContainer" containerID="c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64"
	Nov 09 13:32:05 addons-762402 kubelet[1292]: E1109 13:32:05.446628    1292 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64\": container with ID starting with c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64 not found: ID does not exist" containerID="c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64"
	Nov 09 13:32:05 addons-762402 kubelet[1292]: I1109 13:32:05.446684    1292 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64"} err="failed to get container status \"c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64\": rpc error: code = NotFound desc = could not find container \"c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64\": container with ID starting with c2a2d68e01ce77b15a75b8b710db6ed5a47c9406f8dae97cad666876ffe74a64 not found: ID does not exist"
	Nov 09 13:32:06 addons-762402 kubelet[1292]: I1109 13:32:06.830386    1292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a22c28d-b543-476b-bd00-148dac1b108c" path="/var/lib/kubelet/pods/9a22c28d-b543-476b-bd00-148dac1b108c/volumes"
	Nov 09 13:33:11 addons-762402 kubelet[1292]: I1109 13:33:11.828574    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-8nlkf" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:33:15 addons-762402 kubelet[1292]: I1109 13:33:15.828093    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rrlcz" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:33:19 addons-762402 kubelet[1292]: I1109 13:33:19.827774    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z7stg" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:33:34 addons-762402 kubelet[1292]: I1109 13:33:34.678508    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d44bf62d-b9a8-40b0-ab69-a238a91d3b31-gcp-creds\") pod \"hello-world-app-5d498dc89-7bqrw\" (UID: \"d44bf62d-b9a8-40b0-ab69-a238a91d3b31\") " pod="default/hello-world-app-5d498dc89-7bqrw"
	Nov 09 13:33:34 addons-762402 kubelet[1292]: I1109 13:33:34.678680    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scz86\" (UniqueName: \"kubernetes.io/projected/d44bf62d-b9a8-40b0-ab69-a238a91d3b31-kube-api-access-scz86\") pod \"hello-world-app-5d498dc89-7bqrw\" (UID: \"d44bf62d-b9a8-40b0-ab69-a238a91d3b31\") " pod="default/hello-world-app-5d498dc89-7bqrw"
	Nov 09 13:33:35 addons-762402 kubelet[1292]: I1109 13:33:35.754149    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-7bqrw" podStartSLOduration=0.969400698 podStartE2EDuration="1.754131407s" podCreationTimestamp="2025-11-09 13:33:34 +0000 UTC" firstStartedPulling="2025-11-09 13:33:34.861031769 +0000 UTC m=+246.110681444" lastFinishedPulling="2025-11-09 13:33:35.645762479 +0000 UTC m=+246.895412153" observedRunningTime="2025-11-09 13:33:35.75338295 +0000 UTC m=+247.003032643" watchObservedRunningTime="2025-11-09 13:33:35.754131407 +0000 UTC m=+247.003781101"
	
	
	==> storage-provisioner [befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb] <==
	W1109 13:33:11.089388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:13.092121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:13.096616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:15.098944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:15.103423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:17.106463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:17.110501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:19.113518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:19.116861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:21.119432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:21.124046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:23.126489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:23.130073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:25.132556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:25.136048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:27.138665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:27.141979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:29.144481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:29.148226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:31.150686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:31.154208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:33.156672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:33.160853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:35.164158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:33:35.170951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
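Side note on the storage-provisioner warnings in the dump above: they come from the legacy core/v1 Endpoints API, which the cluster reports as deprecated in favour of discovery.k8s.io/v1 EndpointSlice. A quick, hedged way to look at the replacement objects on this cluster (nothing project-specific assumed beyond the profile name taken from the log):

	kubectl --context addons-762402 -n kube-system get endpointslices.discovery.k8s.io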
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-762402 -n addons-762402
helpers_test.go:269: (dbg) Run:  kubectl --context addons-762402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-762402 describe pod ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-762402 describe pod ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd: exit status 1 (53.461072ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l24wm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f6fqd" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-762402 describe pod ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd: exit status 1
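Note on the post-mortem describe above: the admission-create and admission-patch pods are created in the ingress-nginx namespace by the ingress addon's Jobs, so describing them without a namespace flag looks in the default namespace and returns NotFound (they may also have been garbage-collected by this point; that part is an assumption). A minimal sketch of the namespaced variant, reusing the names from the log:

	kubectl --context addons-762402 -n ingress-nginx get pods,jobs
	kubectl --context addons-762402 -n ingress-nginx describe pod ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd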
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (224.862846ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:33:36.871388   25235 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:33:36.871667   25235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:36.871676   25235 out.go:374] Setting ErrFile to fd 2...
	I1109 13:33:36.871679   25235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:36.871881   25235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:33:36.872096   25235 mustload.go:66] Loading cluster: addons-762402
	I1109 13:33:36.872385   25235 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:36.872396   25235 addons.go:607] checking whether the cluster is paused
	I1109 13:33:36.872472   25235 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:36.872482   25235 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:33:36.872845   25235 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:33:36.890021   25235 ssh_runner.go:195] Run: systemctl --version
	I1109 13:33:36.890072   25235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:33:36.906320   25235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:33:36.996511   25235 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:33:36.996572   25235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:33:37.023950   25235 cri.go:89] found id: "d7738655acb84c4efbc4f35b8b5c93ff7d6577537b16dfabb7e9f5b6db09ef0d"
	I1109 13:33:37.023969   25235 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:33:37.023974   25235 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:33:37.023978   25235 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:33:37.023982   25235 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:33:37.023986   25235 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:33:37.023990   25235 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:33:37.023994   25235 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:33:37.023997   25235 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:33:37.024004   25235 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:33:37.024016   25235 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:33:37.024024   25235 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:33:37.024029   25235 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:33:37.024037   25235 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:33:37.024045   25235 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:33:37.024053   25235 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:33:37.024060   25235 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:33:37.024065   25235 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:33:37.024069   25235 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:33:37.024072   25235 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:33:37.024077   25235 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:33:37.024085   25235 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:33:37.024091   25235 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:33:37.024098   25235 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:33:37.024102   25235 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:33:37.024107   25235 cri.go:89] found id: ""
	I1109 13:33:37.024145   25235 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:33:37.036880   25235 out.go:203] 
	W1109 13:33:37.038017   25235 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:33:37.038032   25235 out.go:285] * 
	* 
	W1109 13:33:37.041029   25235 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:33:37.042080   25235 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable ingress --alsologtostderr -v=1: exit status 11 (226.871518ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:33:37.098124   25299 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:33:37.098421   25299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:37.098432   25299 out.go:374] Setting ErrFile to fd 2...
	I1109 13:33:37.098437   25299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:37.098707   25299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:33:37.099012   25299 mustload.go:66] Loading cluster: addons-762402
	I1109 13:33:37.099352   25299 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:37.099370   25299 addons.go:607] checking whether the cluster is paused
	I1109 13:33:37.099476   25299 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:37.099492   25299 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:33:37.099897   25299 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:33:37.117243   25299 ssh_runner.go:195] Run: systemctl --version
	I1109 13:33:37.117291   25299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:33:37.133598   25299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:33:37.225364   25299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:33:37.225423   25299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:33:37.251601   25299 cri.go:89] found id: "d7738655acb84c4efbc4f35b8b5c93ff7d6577537b16dfabb7e9f5b6db09ef0d"
	I1109 13:33:37.251628   25299 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:33:37.251633   25299 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:33:37.251650   25299 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:33:37.251656   25299 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:33:37.251662   25299 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:33:37.251666   25299 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:33:37.251671   25299 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:33:37.251675   25299 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:33:37.251687   25299 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:33:37.251694   25299 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:33:37.251697   25299 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:33:37.251702   25299 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:33:37.251706   25299 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:33:37.251711   25299 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:33:37.251718   25299 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:33:37.251721   25299 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:33:37.251724   25299 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:33:37.251726   25299 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:33:37.251729   25299 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:33:37.251731   25299 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:33:37.251735   25299 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:33:37.251742   25299 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:33:37.251746   25299 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:33:37.251754   25299 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:33:37.251758   25299 cri.go:89] found id: ""
	I1109 13:33:37.251801   25299 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:33:37.263896   25299 out.go:203] 
	W1109 13:33:37.265001   25299 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:33:37.265016   25299 out.go:285] * 
	* 
	W1109 13:33:37.267939   25299 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:33:37.269088   25299 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.93s)
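Triage note: every `addons disable` invocation in this block exits with status 11 for the same reason visible in the captured stderr. Before disabling an addon, minikube checks whether the cluster is paused (addons.go:607), and that check shells out to `sudo runc list -f json` on the node, which fails with `open /run/runc: no such file or directory`. The runc state directory apparently does not exist on this CRI-O node, which suggests the configured OCI runtime keeps its state elsewhere (for example crun under /run/crun); that runtime detail is an assumption, not something the log confirms. A minimal sketch of commands one might run to confirm the mismatch on the node, with the profile name and binary path taken from the log above:

	out/minikube-linux-amd64 -p addons-762402 ssh -- sudo runc list -f json      # expected to reproduce the "open /run/runc" error from stderr
	out/minikube-linux-amd64 -p addons-762402 ssh -- ls -d /run/runc /run/crun   # /run/crun is an assumed location for crun state, not confirmed by the log
	out/minikube-linux-amd64 -p addons-762402 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # the CRI-level listing that does succeed in the log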

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.23s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-d5mhg" [9d47ecae-35e6-4b53-b51a-9a20fd5fa555] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00333445s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (229.671703ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:31:17.513519   21650 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:17.513690   21650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:17.513700   21650 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:17.513704   21650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:17.513933   21650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:17.514165   21650 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:17.514457   21650 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:17.514469   21650 addons.go:607] checking whether the cluster is paused
	I1109 13:31:17.514545   21650 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:17.514555   21650 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:17.514950   21650 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:17.532431   21650 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:17.532480   21650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:17.548945   21650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:17.639763   21650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:17.639837   21650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:17.668416   21650 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:17.668436   21650 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:17.668442   21650 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:17.668446   21650 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:17.668450   21650 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:17.668454   21650 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:17.668458   21650 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:17.668462   21650 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:17.668466   21650 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:17.668474   21650 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:17.668479   21650 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:17.668489   21650 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:17.668494   21650 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:17.668499   21650 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:17.668505   21650 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:17.668513   21650 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:17.668520   21650 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:17.668528   21650 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:17.668534   21650 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:17.668537   21650 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:17.668549   21650 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:17.668552   21650 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:17.668556   21650 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:17.668561   21650 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:17.668565   21650 cri.go:89] found id: ""
	I1109 13:31:17.668607   21650 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:17.683272   21650 out.go:203] 
	W1109 13:31:17.684359   21650 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:17.684375   21650 out.go:285] * 
	* 
	W1109 13:31:17.687307   21650 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:17.688478   21650 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.23s)
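The disable itself never runs: before touching the addon, `minikube addons disable` checks whether the cluster is paused by listing kube-system containers through crictl and then running `sudo runc list -f json` on the node (see the cri.go and ssh_runner lines above). On this crio profile /run/runc does not exist, so the paused check exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED even though the gadget pod was healthy. A minimal diagnostic sketch, run manually against the same profile (the first two commands mirror the ssh_runner calls in the log; the /run/crun path is only a guess in case crio is configured with crun rather than runc):

	minikube -p addons-762402 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # the container listing that succeeds
	minikube -p addons-762402 ssh -- sudo runc list -f json                                                      # the paused probe that fails
	minikube -p addons-762402 ssh -- ls -d /run/runc /run/crun                                                   # check which runtime state directory actually exists

The same paused check shows up in the MetricsServer, CSI, and Headlamp failures below.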

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.738347ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.001988109s
addons_test.go:463: (dbg) Run:  kubectl --context addons-762402 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (232.244635ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:31:08.162141   19908 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:08.162293   19908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:08.162303   19908 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:08.162307   19908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:08.162501   19908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:08.162790   19908 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:08.163186   19908 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:08.163202   19908 addons.go:607] checking whether the cluster is paused
	I1109 13:31:08.163300   19908 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:08.163317   19908 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:08.163715   19908 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:08.180819   19908 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:08.180881   19908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:08.198143   19908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:08.289082   19908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:08.289156   19908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:08.316085   19908 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:08.316111   19908 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:08.316117   19908 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:08.316121   19908 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:08.316126   19908 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:08.316132   19908 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:08.316136   19908 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:08.316140   19908 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:08.316144   19908 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:08.316160   19908 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:08.316168   19908 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:08.316173   19908 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:08.316178   19908 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:08.316182   19908 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:08.316187   19908 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:08.316206   19908 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:08.316216   19908 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:08.316228   19908 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:08.316231   19908 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:08.316235   19908 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:08.316241   19908 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:08.316245   19908 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:08.316250   19908 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:08.316256   19908 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:08.316261   19908 cri.go:89] found id: ""
	I1109 13:31:08.316314   19908 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:08.328808   19908 out.go:203] 
	W1109 13:31:08.329899   19908 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:08.329919   19908 out.go:285] * 
	* 
	W1109 13:31:08.332803   19908 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:08.333908   19908 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.30s)
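As with InspektorGadget, metrics-server itself was fine (the pod reached Ready and `kubectl top pods` returned data); only the paused check in the disable path failed. A quick manual check, assuming the same context name as in the log, to confirm the metrics pipeline independently of the harness:

	kubectl --context addons-762402 top pods -n kube-system
	kubectl --context addons-762402 get --raw /apis/metrics.k8s.io/v1beta1/nodes   # raw Metrics API query; non-empty JSON means the API service is serving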

                                                
                                    
x
+
TestAddons/parallel/CSI (52.71s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1109 13:31:13.520942    9365 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1109 13:31:13.524118    9365 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1109 13:31:13.524143    9365 kapi.go:107] duration metric: took 3.219396ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.229621ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [969387ab-a4f2-49c4-8c2e-1e1587884114] Pending
helpers_test.go:352: "task-pv-pod" [969387ab-a4f2-49c4-8c2e-1e1587884114] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [969387ab-a4f2-49c4-8c2e-1e1587884114] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.002861453s
addons_test.go:572: (dbg) Run:  kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-762402 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-762402 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-762402 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-762402 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [9a22c28d-b543-476b-bd00-148dac1b108c] Pending
helpers_test.go:352: "task-pv-pod-restore" [9a22c28d-b543-476b-bd00-148dac1b108c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [9a22c28d-b543-476b-bd00-148dac1b108c] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00285337s
addons_test.go:614: (dbg) Run:  kubectl --context addons-762402 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-762402 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-762402 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (226.425258ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:05.817365   23200 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:05.817630   23200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:05.817653   23200 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:05.817660   23200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:05.817838   23200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:32:05.818108   23200 mustload.go:66] Loading cluster: addons-762402
	I1109 13:32:05.818432   23200 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:05.818450   23200 addons.go:607] checking whether the cluster is paused
	I1109 13:32:05.818552   23200 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:05.818567   23200 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:32:05.818926   23200 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:32:05.836801   23200 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:05.836851   23200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:32:05.853261   23200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:32:05.943628   23200 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:05.943693   23200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:05.970957   23200 cri.go:89] found id: "d7738655acb84c4efbc4f35b8b5c93ff7d6577537b16dfabb7e9f5b6db09ef0d"
	I1109 13:32:05.970983   23200 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:32:05.970987   23200 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:32:05.970991   23200 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:32:05.970994   23200 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:32:05.970998   23200 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:32:05.971000   23200 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:32:05.971003   23200 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:32:05.971005   23200 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:32:05.971018   23200 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:32:05.971020   23200 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:32:05.971023   23200 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:32:05.971025   23200 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:32:05.971028   23200 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:32:05.971030   23200 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:32:05.971036   23200 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:32:05.971039   23200 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:32:05.971043   23200 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:32:05.971045   23200 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:32:05.971048   23200 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:32:05.971050   23200 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:32:05.971052   23200 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:32:05.971054   23200 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:32:05.971057   23200 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:32:05.971059   23200 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:32:05.971062   23200 cri.go:89] found id: ""
	I1109 13:32:05.971105   23200 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:05.983760   23200 out.go:203] 
	W1109 13:32:05.985046   23200 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:05.985058   23200 out.go:285] * 
	* 
	W1109 13:32:05.988006   23200 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:05.989162   23200 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (232.836524ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:06.047143   23277 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:06.047295   23277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:06.047305   23277 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:06.047309   23277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:06.047494   23277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:32:06.047734   23277 mustload.go:66] Loading cluster: addons-762402
	I1109 13:32:06.048064   23277 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:06.048078   23277 addons.go:607] checking whether the cluster is paused
	I1109 13:32:06.048156   23277 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:06.048167   23277 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:32:06.048558   23277 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:32:06.065785   23277 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:06.065850   23277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:32:06.082334   23277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:32:06.174157   23277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:06.174231   23277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:06.203942   23277 cri.go:89] found id: "d7738655acb84c4efbc4f35b8b5c93ff7d6577537b16dfabb7e9f5b6db09ef0d"
	I1109 13:32:06.203964   23277 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:32:06.203969   23277 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:32:06.203989   23277 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:32:06.203993   23277 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:32:06.203998   23277 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:32:06.204002   23277 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:32:06.204010   23277 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:32:06.204015   23277 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:32:06.204025   23277 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:32:06.204032   23277 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:32:06.204037   23277 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:32:06.204044   23277 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:32:06.204048   23277 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:32:06.204056   23277 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:32:06.204065   23277 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:32:06.204073   23277 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:32:06.204078   23277 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:32:06.204082   23277 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:32:06.204086   23277 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:32:06.204095   23277 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:32:06.204102   23277 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:32:06.204107   23277 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:32:06.204114   23277 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:32:06.204118   23277 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:32:06.204125   23277 cri.go:89] found id: ""
	I1109 13:32:06.204170   23277 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:06.217608   23277 out.go:203] 
	W1109 13:32:06.218715   23277 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:06.218731   23277 out.go:285] * 
	* 
	W1109 13:32:06.221700   23277 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:06.222807   23277 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (52.71s)
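Note that the CSI exercise itself passed end to end (PVC bind, pod mount, snapshot, restore); only the two addon-disable calls at the end hit the paused check. To replay the storage flow by hand, the sequence the test drives (taken from the Run lines above, using minikube's bundled testdata manifests) is roughly:

	kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-762402 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
	kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-762402 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml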

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.34s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-762402 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-762402 --alsologtostderr -v=1: exit status 11 (227.007629ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:31:02.094670   18939 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:02.094841   18939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:02.094852   18939 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:02.094858   18939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:02.095065   18939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:02.095332   18939 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:02.095691   18939 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:02.095708   18939 addons.go:607] checking whether the cluster is paused
	I1109 13:31:02.095810   18939 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:02.095836   18939 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:02.096189   18939 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:02.113635   18939 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:02.113697   18939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:02.130180   18939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:02.220343   18939 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:02.220392   18939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:02.247130   18939 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:02.247149   18939 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:02.247154   18939 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:02.247159   18939 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:02.247163   18939 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:02.247168   18939 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:02.247172   18939 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:02.247176   18939 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:02.247182   18939 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:02.247189   18939 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:02.247199   18939 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:02.247203   18939 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:02.247212   18939 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:02.247216   18939 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:02.247220   18939 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:02.247237   18939 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:02.247246   18939 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:02.247249   18939 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:02.247252   18939 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:02.247254   18939 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:02.247261   18939 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:02.247264   18939 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:02.247266   18939 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:02.247268   18939 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:02.247270   18939 cri.go:89] found id: ""
	I1109 13:31:02.247299   18939 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:02.259883   18939 out.go:203] 
	W1109 13:31:02.260976   18939 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:02.260991   18939 out.go:285] * 
	* 
	W1109 13:31:02.263874   18939 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:02.264911   18939 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-762402 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-762402
helpers_test.go:243: (dbg) docker inspect addons-762402:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1",
	        "Created": "2025-11-09T13:29:15.097575436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11340,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:29:15.128393835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1/hosts",
	        "LogPath": "/var/lib/docker/containers/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1/821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1-json.log",
	        "Name": "/addons-762402",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-762402:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-762402",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "821c1afb04ad22e843320e6377deb93527e6ad7c99baba694ddb4ac0ff97e5b1",
	                "LowerDir": "/var/lib/docker/overlay2/e6a2b324288c15b6de54fa215af7bfd988b91baf8b258f3ffdddbbe00df26150-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6a2b324288c15b6de54fa215af7bfd988b91baf8b258f3ffdddbbe00df26150/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6a2b324288c15b6de54fa215af7bfd988b91baf8b258f3ffdddbbe00df26150/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6a2b324288c15b6de54fa215af7bfd988b91baf8b258f3ffdddbbe00df26150/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-762402",
	                "Source": "/var/lib/docker/volumes/addons-762402/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-762402",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-762402",
	                "name.minikube.sigs.k8s.io": "addons-762402",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c73163baf89e0a44d9d35f63c5bbf73045eadc00a8cc4feef704f6b1ccd5cd1",
	            "SandboxKey": "/var/run/docker/netns/9c73163baf89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-762402": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:71:c5:60:f8:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d89a87f039a77445f033266b233e8ec4079eeadc9cdaa00ebb680ec78f070cc4",
	                    "EndpointID": "34b003a2e84c4cda2fceb53081a266378dc792aa714e2d36f367a8f413ded0a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-762402",
	                        "821c1afb04ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
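The dump above is the raw `docker container inspect` output for the addons-762402 node container. As a sketch only (it assumes the container from this run still exists on the Jenkins host), the two fields that matter when chasing a failure like this — the mapped SSH host port and the node IP — can be pulled back out with the same Go templates minikube itself runs later in these logs:

    # Mapped host port for the container's 22/tcp endpoint (template as used by cli_runner below):
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-762402

    # Node IP on the addons-762402 network (expected 192.168.49.2 per the dump above):
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-762402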
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-762402 -n addons-762402
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-762402 logs -n 25: (1.021087156s)
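The two post-mortem commands the harness ran above can be replayed by hand against the same profile (a sketch only; it assumes the addons-762402 profile is still present):

    # Host state of the node backing the profile:
    out/minikube-linux-amd64 status --format={{.Host}} -p addons-762402 -n addons-762402

    # Last 25 lines of cluster logs, as collected for the dump that follows:
    out/minikube-linux-amd64 -p addons-762402 logs -n 25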
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-517015 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-517015   │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ delete  │ -p download-only-517015                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-517015   │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ start   │ -o=json --download-only -p download-only-263673 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-263673   │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ delete  │ -p download-only-263673                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-263673   │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ delete  │ -p download-only-517015                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-517015   │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ delete  │ -p download-only-263673                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-263673   │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ start   │ --download-only -p download-docker-824434 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-824434 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ -p download-docker-824434                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-824434 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ start   │ --download-only -p binary-mirror-557048 --alsologtostderr --binary-mirror http://127.0.0.1:42397 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-557048   │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ -p binary-mirror-557048                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-557048   │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ addons  │ enable dashboard -p addons-762402                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-762402          │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-762402                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-762402          │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ start   │ -p addons-762402 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-762402          │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:30 UTC │
	│ addons  │ addons-762402 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-762402          │ jenkins │ v1.37.0 │ 09 Nov 25 13:30 UTC │                     │
	│ addons  │ addons-762402 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-762402          │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-762402 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-762402          │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:28:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:28:51.600069   10696 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:28:51.600301   10696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:51.600310   10696 out.go:374] Setting ErrFile to fd 2...
	I1109 13:28:51.600317   10696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:51.600491   10696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:28:51.600961   10696 out.go:368] Setting JSON to false
	I1109 13:28:51.601742   10696 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":682,"bootTime":1762694250,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:28:51.601814   10696 start.go:143] virtualization: kvm guest
	I1109 13:28:51.603408   10696 out.go:179] * [addons-762402] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:28:51.604561   10696 notify.go:221] Checking for updates...
	I1109 13:28:51.604578   10696 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:28:51.605700   10696 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:28:51.606781   10696 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:28:51.607878   10696 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 13:28:51.608938   10696 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:28:51.610065   10696 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:28:51.611385   10696 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:28:51.633924   10696 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 13:28:51.633980   10696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:28:51.685216   10696 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-09 13:28:51.676680974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:28:51.685304   10696 docker.go:319] overlay module found
	I1109 13:28:51.687526   10696 out.go:179] * Using the docker driver based on user configuration
	I1109 13:28:51.688526   10696 start.go:309] selected driver: docker
	I1109 13:28:51.688537   10696 start.go:930] validating driver "docker" against <nil>
	I1109 13:28:51.688546   10696 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:28:51.689040   10696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:28:51.738660   10696 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-09 13:28:51.729890421 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:28:51.738844   10696 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:28:51.739104   10696 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:28:51.740630   10696 out.go:179] * Using Docker driver with root privileges
	I1109 13:28:51.741825   10696 cni.go:84] Creating CNI manager for ""
	I1109 13:28:51.741876   10696 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:28:51.741885   10696 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 13:28:51.741933   10696 start.go:353] cluster config:
	{Name:addons-762402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1109 13:28:51.743096   10696 out.go:179] * Starting "addons-762402" primary control-plane node in "addons-762402" cluster
	I1109 13:28:51.744237   10696 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:28:51.745347   10696 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:28:51.746361   10696 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:28:51.746385   10696 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 13:28:51.746383   10696 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:28:51.746391   10696 cache.go:65] Caching tarball of preloaded images
	I1109 13:28:51.746461   10696 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 13:28:51.746471   10696 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:28:51.746798   10696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/config.json ...
	I1109 13:28:51.746832   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/config.json: {Name:mkdd4030f0ca96ade544f1277301cec246e906a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:28:51.761961   10696 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:28:51.762059   10696 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1109 13:28:51.762074   10696 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1109 13:28:51.762078   10696 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1109 13:28:51.762084   10696 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1109 13:28:51.762091   10696 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1109 13:29:04.099272   10696 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1109 13:29:04.099317   10696 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:29:04.099358   10696 start.go:360] acquireMachinesLock for addons-762402: {Name:mkb378b64899117f3c03bff88efab238bc9c3942 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:29:04.099457   10696 start.go:364] duration metric: took 77.657µs to acquireMachinesLock for "addons-762402"
	I1109 13:29:04.099484   10696 start.go:93] Provisioning new machine with config: &{Name:addons-762402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:04.099573   10696 start.go:125] createHost starting for "" (driver="docker")
	I1109 13:29:04.101685   10696 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1109 13:29:04.101903   10696 start.go:159] libmachine.API.Create for "addons-762402" (driver="docker")
	I1109 13:29:04.101938   10696 client.go:173] LocalClient.Create starting
	I1109 13:29:04.102045   10696 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 13:29:04.275693   10696 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 13:29:04.414253   10696 cli_runner.go:164] Run: docker network inspect addons-762402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 13:29:04.430476   10696 cli_runner.go:211] docker network inspect addons-762402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 13:29:04.430528   10696 network_create.go:284] running [docker network inspect addons-762402] to gather additional debugging logs...
	I1109 13:29:04.430549   10696 cli_runner.go:164] Run: docker network inspect addons-762402
	W1109 13:29:04.445800   10696 cli_runner.go:211] docker network inspect addons-762402 returned with exit code 1
	I1109 13:29:04.445831   10696 network_create.go:287] error running [docker network inspect addons-762402]: docker network inspect addons-762402: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-762402 not found
	I1109 13:29:04.445849   10696 network_create.go:289] output of [docker network inspect addons-762402]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-762402 not found
	
	** /stderr **
	I1109 13:29:04.445947   10696 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:29:04.461367   10696 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002009270}
	I1109 13:29:04.461403   10696 network_create.go:124] attempt to create docker network addons-762402 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 13:29:04.461446   10696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-762402 addons-762402
	I1109 13:29:04.513425   10696 network_create.go:108] docker network addons-762402 192.168.49.0/24 created
	I1109 13:29:04.513452   10696 kic.go:121] calculated static IP "192.168.49.2" for the "addons-762402" container
	I1109 13:29:04.513512   10696 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 13:29:04.528365   10696 cli_runner.go:164] Run: docker volume create addons-762402 --label name.minikube.sigs.k8s.io=addons-762402 --label created_by.minikube.sigs.k8s.io=true
	I1109 13:29:04.544338   10696 oci.go:103] Successfully created a docker volume addons-762402
	I1109 13:29:04.544389   10696 cli_runner.go:164] Run: docker run --rm --name addons-762402-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-762402 --entrypoint /usr/bin/test -v addons-762402:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 13:29:10.792361   10696 cli_runner.go:217] Completed: docker run --rm --name addons-762402-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-762402 --entrypoint /usr/bin/test -v addons-762402:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (6.247922954s)
	I1109 13:29:10.792393   10696 oci.go:107] Successfully prepared a docker volume addons-762402
	I1109 13:29:10.792445   10696 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:10.792460   10696 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 13:29:10.792526   10696 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-762402:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 13:29:15.027728   10696 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-762402:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.23516231s)
	I1109 13:29:15.027773   10696 kic.go:203] duration metric: took 4.235309729s to extract preloaded images to volume ...
	W1109 13:29:15.027871   10696 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 13:29:15.027901   10696 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 13:29:15.027937   10696 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 13:29:15.083197   10696 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-762402 --name addons-762402 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-762402 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-762402 --network addons-762402 --ip 192.168.49.2 --volume addons-762402:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 13:29:15.392455   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Running}}
	I1109 13:29:15.408924   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:15.427153   10696 cli_runner.go:164] Run: docker exec addons-762402 stat /var/lib/dpkg/alternatives/iptables
	I1109 13:29:15.469947   10696 oci.go:144] the created container "addons-762402" has a running status.
	I1109 13:29:15.469982   10696 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa...
	I1109 13:29:16.033842   10696 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 13:29:16.057897   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:16.073654   10696 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 13:29:16.073674   10696 kic_runner.go:114] Args: [docker exec --privileged addons-762402 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 13:29:16.114282   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:16.130208   10696 machine.go:94] provisionDockerMachine start ...
	I1109 13:29:16.130288   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.146042   10696 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:16.146277   10696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:16.146293   10696 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:29:16.267677   10696 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-762402
	
	I1109 13:29:16.267699   10696 ubuntu.go:182] provisioning hostname "addons-762402"
	I1109 13:29:16.267758   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.285314   10696 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:16.285500   10696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:16.285513   10696 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-762402 && echo "addons-762402" | sudo tee /etc/hostname
	I1109 13:29:16.415536   10696 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-762402
	
	I1109 13:29:16.415596   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.432514   10696 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:16.432722   10696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:16.432739   10696 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-762402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-762402/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-762402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:29:16.554254   10696 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:29:16.554278   10696 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 13:29:16.554304   10696 ubuntu.go:190] setting up certificates
	I1109 13:29:16.554313   10696 provision.go:84] configureAuth start
	I1109 13:29:16.554388   10696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-762402
	I1109 13:29:16.570560   10696 provision.go:143] copyHostCerts
	I1109 13:29:16.570627   10696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 13:29:16.570771   10696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 13:29:16.570847   10696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 13:29:16.570918   10696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.addons-762402 san=[127.0.0.1 192.168.49.2 addons-762402 localhost minikube]
	I1109 13:29:16.712281   10696 provision.go:177] copyRemoteCerts
	I1109 13:29:16.712335   10696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:29:16.712367   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.729261   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:16.819704   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 13:29:16.836632   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:29:16.851496   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:29:16.866651   10696 provision.go:87] duration metric: took 312.316652ms to configureAuth
	I1109 13:29:16.866674   10696 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:29:16.866806   10696 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:16.866888   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:16.883238   10696 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:16.883473   10696 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:16.883497   10696 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:29:17.109605   10696 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:29:17.109627   10696 machine.go:97] duration metric: took 979.402343ms to provisionDockerMachine
	I1109 13:29:17.109664   10696 client.go:176] duration metric: took 13.007715858s to LocalClient.Create
	I1109 13:29:17.109684   10696 start.go:167] duration metric: took 13.007781712s to libmachine.API.Create "addons-762402"
	I1109 13:29:17.109695   10696 start.go:293] postStartSetup for "addons-762402" (driver="docker")
	I1109 13:29:17.109707   10696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:29:17.109768   10696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:29:17.109817   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:17.126600   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:17.217963   10696 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:29:17.220946   10696 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:29:17.220967   10696 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:29:17.220976   10696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 13:29:17.221016   10696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 13:29:17.221036   10696 start.go:296] duration metric: took 111.335357ms for postStartSetup
	I1109 13:29:17.221269   10696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-762402
	I1109 13:29:17.238120   10696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/config.json ...
	I1109 13:29:17.238332   10696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:29:17.238366   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:17.253826   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:17.341788   10696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:29:17.345817   10696 start.go:128] duration metric: took 13.246232223s to createHost
	I1109 13:29:17.345835   10696 start.go:83] releasing machines lock for "addons-762402", held for 13.246364553s
	I1109 13:29:17.345894   10696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-762402
	I1109 13:29:17.362599   10696 ssh_runner.go:195] Run: cat /version.json
	I1109 13:29:17.362665   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:17.362669   10696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:29:17.362718   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:17.380132   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:17.380262   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:17.528691   10696 ssh_runner.go:195] Run: systemctl --version
	I1109 13:29:17.534292   10696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:29:17.564761   10696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:29:17.568813   10696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:29:17.568876   10696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:29:17.591955   10696 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 13:29:17.591971   10696 start.go:496] detecting cgroup driver to use...
	I1109 13:29:17.591993   10696 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 13:29:17.592030   10696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:29:17.605944   10696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:29:17.616507   10696 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:29:17.616548   10696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:29:17.630930   10696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:29:17.646055   10696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:29:17.721903   10696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:29:17.802173   10696 docker.go:234] disabling docker service ...
	I1109 13:29:17.802221   10696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:29:17.817723   10696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:29:17.828570   10696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:29:17.904433   10696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:29:17.980708   10696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:29:17.991266   10696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:29:18.003629   10696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:29:18.003686   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.012603   10696 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 13:29:18.012659   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.020531   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.028227   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.035792   10696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:29:18.042765   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.050193   10696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.061726   10696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:18.069256   10696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:29:18.075781   10696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1109 13:29:18.075823   10696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1109 13:29:18.086408   10696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
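The failed sysctl probe above is expected when the br_netfilter module is not yet loaded; the two follow-up commands load it and enable IPv4 forwarding. As a standalone sketch:

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # with the module loaded, the earlier probe now succeeds
    sudo sysctl net.bridge.bridge-nf-call-iptables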
	I1109 13:29:18.092836   10696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:18.165914   10696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:29:18.263321   10696 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:29:18.263387   10696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:29:18.266952   10696 start.go:564] Will wait 60s for crictl version
	I1109 13:29:18.266994   10696 ssh_runner.go:195] Run: which crictl
	I1109 13:29:18.270176   10696 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:29:18.292962   10696 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:29:18.293055   10696 ssh_runner.go:195] Run: crio --version
	I1109 13:29:18.318013   10696 ssh_runner.go:195] Run: crio --version
	I1109 13:29:18.343928   10696 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:29:18.345071   10696 cli_runner.go:164] Run: docker network inspect addons-762402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:29:18.361160   10696 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:29:18.364725   10696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
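The host.minikube.internal entry is maintained with a grep-and-rewrite pattern so repeated starts do not accumulate duplicate lines; spelled out:

    # drop any stale host.minikube.internal mapping, then append the current one
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts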
	I1109 13:29:18.373807   10696 kubeadm.go:884] updating cluster {Name:addons-762402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:29:18.373917   10696 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:18.373954   10696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:18.401424   10696 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:18.401440   10696 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:29:18.401472   10696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:18.423831   10696 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:18.423847   10696 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:29:18.423854   10696 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 13:29:18.423927   10696 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-762402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:29:18.423982   10696 ssh_runner.go:195] Run: crio config
	I1109 13:29:18.465006   10696 cni.go:84] Creating CNI manager for ""
	I1109 13:29:18.465030   10696 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:18.465049   10696 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:29:18.465072   10696 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-762402 NodeName:addons-762402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:29:18.465207   10696 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-762402"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:29:18.465268   10696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:29:18.472401   10696 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:29:18.472449   10696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:29:18.479411   10696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 13:29:18.490717   10696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:29:18.504284   10696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
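The kubeadm.yaml.new copied here holds the three-document config shown above; once renamed to kubeadm.yaml it is fed verbatim to kubeadm during StartCluster further below. Abridged, that invocation is essentially:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem   # full list in the log below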
	I1109 13:29:18.515473   10696 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 13:29:18.518634   10696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:18.527386   10696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:18.605975   10696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:18.629524   10696 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402 for IP: 192.168.49.2
	I1109 13:29:18.629545   10696 certs.go:195] generating shared ca certs ...
	I1109 13:29:18.629563   10696 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:18.629714   10696 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 13:29:18.784021   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt ...
	I1109 13:29:18.784046   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt: {Name:mkec03d697f45aeb041c27c88860e2fa28d1fd26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:18.784199   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key ...
	I1109 13:29:18.784209   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key: {Name:mkc8972f7a276c3b9e2064bd653c301100f1c2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:18.784281   10696 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 13:29:19.153419   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt ...
	I1109 13:29:19.153443   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt: {Name:mk47ed1f12a8fbfc55cbef6d30c0da65835c47ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.153611   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key ...
	I1109 13:29:19.153623   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key: {Name:mk89d6a4f617bf3b6cc9fde532fe32e3368602fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.153728   10696 certs.go:257] generating profile certs ...
	I1109 13:29:19.153782   10696 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.key
	I1109 13:29:19.153795   10696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt with IP's: []
	I1109 13:29:19.727372   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt ...
	I1109 13:29:19.727399   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: {Name:mkeac7e44f29a869869e9a50a16f513beb3c0eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.727560   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.key ...
	I1109 13:29:19.727570   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.key: {Name:mk871ff6f1019eadfaa466e0dd5301226c74d694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.727654   10696 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key.f007e2c6
	I1109 13:29:19.727672   10696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt.f007e2c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1109 13:29:19.966032   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt.f007e2c6 ...
	I1109 13:29:19.966057   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt.f007e2c6: {Name:mk30c7821a4db207a680fad2f35e7f865ebaf808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.966193   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key.f007e2c6 ...
	I1109 13:29:19.966205   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key.f007e2c6: {Name:mk6a9b75003de9d61be5a994a207c1ef5db0240a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:19.966275   10696 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt.f007e2c6 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt
	I1109 13:29:19.966350   10696 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key.f007e2c6 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key
	I1109 13:29:19.966398   10696 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.key
	I1109 13:29:19.966414   10696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.crt with IP's: []
	I1109 13:29:20.065922   10696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.crt ...
	I1109 13:29:20.065945   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.crt: {Name:mk64708e7e19aab5fc191499498e0bb88944b34a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:20.066090   10696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.key ...
	I1109 13:29:20.066100   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.key: {Name:mk172cda03059e7d89d250b1ec8c6cc1f7d6eba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:20.066258   10696 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:29:20.066289   10696 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 13:29:20.066312   10696 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:29:20.066332   10696 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 13:29:20.066920   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:29:20.083722   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 13:29:20.099141   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:29:20.114190   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:29:20.128873   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 13:29:20.143941   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:29:20.159152   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:29:20.174141   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:29:20.189274   10696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:29:20.206059   10696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:29:20.216988   10696 ssh_runner.go:195] Run: openssl version
	I1109 13:29:20.222599   10696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:29:20.236118   10696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:20.239749   10696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:20.239798   10696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:20.274314   10696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
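The b5213941.0 name is the OpenSSL subject-hash form used for CA lookup, taken from the `openssl x509 -hash` call just above; the equivalent manual steps are:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # hash is b5213941 for this CA; link it where OpenSSL-based clients expect trusted certs
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"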
	I1109 13:29:20.281994   10696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:29:20.285189   10696 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 13:29:20.285246   10696 kubeadm.go:401] StartCluster: {Name:addons-762402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-762402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:20.285321   10696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:29:20.285370   10696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:29:20.309578   10696 cri.go:89] found id: ""
	I1109 13:29:20.309629   10696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:29:20.316378   10696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:29:20.323127   10696 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 13:29:20.323169   10696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:29:20.329783   10696 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 13:29:20.329797   10696 kubeadm.go:158] found existing configuration files:
	
	I1109 13:29:20.329821   10696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 13:29:20.336502   10696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 13:29:20.336545   10696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 13:29:20.342897   10696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 13:29:20.349498   10696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 13:29:20.349542   10696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 13:29:20.355920   10696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 13:29:20.362346   10696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 13:29:20.362389   10696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:29:20.368601   10696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 13:29:20.375028   10696 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 13:29:20.375070   10696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
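The four grep/rm pairs above implement a single rule: any leftover kubeconfig under /etc/kubernetes that does not already point at control-plane.minikube.internal:8443 is removed before kubeadm init runs. Compressed into a loop, the same cleanup reads:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done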
	I1109 13:29:20.381435   10696 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 13:29:20.413146   10696 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 13:29:20.413194   10696 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 13:29:20.431213   10696 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 13:29:20.431284   10696 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 13:29:20.431350   10696 kubeadm.go:319] OS: Linux
	I1109 13:29:20.431449   10696 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 13:29:20.431527   10696 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 13:29:20.431599   10696 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 13:29:20.431683   10696 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 13:29:20.431753   10696 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 13:29:20.431817   10696 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 13:29:20.431900   10696 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 13:29:20.431980   10696 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 13:29:20.482271   10696 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 13:29:20.482391   10696 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 13:29:20.482526   10696 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 13:29:20.489475   10696 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 13:29:20.491279   10696 out.go:252]   - Generating certificates and keys ...
	I1109 13:29:20.491347   10696 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 13:29:20.491405   10696 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 13:29:20.725867   10696 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 13:29:21.379532   10696 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 13:29:21.689892   10696 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 13:29:21.743695   10696 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 13:29:21.979264   10696 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 13:29:21.979442   10696 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-762402 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 13:29:22.076345   10696 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 13:29:22.076479   10696 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-762402 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 13:29:22.388420   10696 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 13:29:22.751667   10696 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 13:29:22.894049   10696 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 13:29:22.894143   10696 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 13:29:22.926745   10696 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 13:29:23.010543   10696 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 13:29:23.193007   10696 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 13:29:23.516027   10696 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 13:29:23.572292   10696 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 13:29:23.572750   10696 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 13:29:23.576156   10696 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 13:29:23.578164   10696 out.go:252]   - Booting up control plane ...
	I1109 13:29:23.578249   10696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 13:29:23.578317   10696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 13:29:23.579140   10696 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 13:29:23.591534   10696 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 13:29:23.591711   10696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 13:29:23.598539   10696 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 13:29:23.598878   10696 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 13:29:23.598919   10696 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 13:29:23.690236   10696 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 13:29:23.690367   10696 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 13:29:24.691869   10696 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001678637s
	I1109 13:29:24.694685   10696 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 13:29:24.694803   10696 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1109 13:29:24.694957   10696 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 13:29:24.695088   10696 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 13:29:25.523427   10696 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 828.648538ms
	I1109 13:29:26.544133   10696 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.849463088s
	I1109 13:29:28.195677   10696 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.500970932s
	I1109 13:29:28.206094   10696 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 13:29:28.213352   10696 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 13:29:28.220500   10696 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 13:29:28.220792   10696 kubeadm.go:319] [mark-control-plane] Marking the node addons-762402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 13:29:28.227107   10696 kubeadm.go:319] [bootstrap-token] Using token: yfmz4d.ygaatjqzsyeab290
	I1109 13:29:28.228174   10696 out.go:252]   - Configuring RBAC rules ...
	I1109 13:29:28.228306   10696 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 13:29:28.230729   10696 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 13:29:28.235423   10696 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 13:29:28.237426   10696 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 13:29:28.239438   10696 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 13:29:28.241402   10696 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 13:29:28.600384   10696 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 13:29:29.012474   10696 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 13:29:29.602473   10696 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 13:29:29.603496   10696 kubeadm.go:319] 
	I1109 13:29:29.603582   10696 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 13:29:29.603604   10696 kubeadm.go:319] 
	I1109 13:29:29.603747   10696 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 13:29:29.603763   10696 kubeadm.go:319] 
	I1109 13:29:29.603813   10696 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 13:29:29.603909   10696 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 13:29:29.603991   10696 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 13:29:29.604000   10696 kubeadm.go:319] 
	I1109 13:29:29.604084   10696 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 13:29:29.604094   10696 kubeadm.go:319] 
	I1109 13:29:29.604159   10696 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 13:29:29.604176   10696 kubeadm.go:319] 
	I1109 13:29:29.604252   10696 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 13:29:29.604364   10696 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 13:29:29.604467   10696 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 13:29:29.604479   10696 kubeadm.go:319] 
	I1109 13:29:29.604600   10696 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 13:29:29.604701   10696 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 13:29:29.604708   10696 kubeadm.go:319] 
	I1109 13:29:29.604776   10696 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yfmz4d.ygaatjqzsyeab290 \
	I1109 13:29:29.604867   10696 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 13:29:29.604898   10696 kubeadm.go:319] 	--control-plane 
	I1109 13:29:29.604906   10696 kubeadm.go:319] 
	I1109 13:29:29.605005   10696 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 13:29:29.605017   10696 kubeadm.go:319] 
	I1109 13:29:29.605125   10696 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yfmz4d.ygaatjqzsyeab290 \
	I1109 13:29:29.605248   10696 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 13:29:29.607094   10696 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 13:29:29.607187   10696 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
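The --discovery-token-ca-cert-hash in the join commands can be re-derived from the cluster CA using the standard kubeadm recipe (this sketch assumes the RSA CA key minikube generates; the output should match the sha256 value printed above):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'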
	I1109 13:29:29.607205   10696 cni.go:84] Creating CNI manager for ""
	I1109 13:29:29.607212   10696 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:29.608611   10696 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 13:29:29.609687   10696 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 13:29:29.613562   10696 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 13:29:29.613579   10696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 13:29:29.625898   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 13:29:29.811744   10696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:29:29.811833   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:29.811843   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-762402 minikube.k8s.io/updated_at=2025_11_09T13_29_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=addons-762402 minikube.k8s.io/primary=true
	I1109 13:29:29.820434   10696 ops.go:34] apiserver oom_adj: -16
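The clusterrolebinding and node labels applied above can be verified with plain kubectl against the same kubeconfig, for example:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node addons-762402 --show-labels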
	I1109 13:29:29.890847   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:30.391342   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:30.891650   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:31.390970   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:31.891406   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:32.391556   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:32.891129   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:33.391148   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:33.891714   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:34.391462   10696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:34.449561   10696 kubeadm.go:1114] duration metric: took 4.637795505s to wait for elevateKubeSystemPrivileges
	I1109 13:29:34.449600   10696 kubeadm.go:403] duration metric: took 14.164359999s to StartCluster
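The repeated `get sa default` calls above are a readiness poll: the RBAC elevation only takes effect once the controller-manager has created the default ServiceAccount. An equivalent standalone wait, under the same kubeconfig assumption:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done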
	I1109 13:29:34.449623   10696 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:34.449761   10696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:29:34.450184   10696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:34.450369   10696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 13:29:34.450404   10696 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:34.450452   10696 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1109 13:29:34.450582   10696 addons.go:70] Setting ingress-dns=true in profile "addons-762402"
	I1109 13:29:34.450602   10696 addons.go:70] Setting inspektor-gadget=true in profile "addons-762402"
	I1109 13:29:34.450619   10696 addons.go:239] Setting addon inspektor-gadget=true in "addons-762402"
	I1109 13:29:34.450620   10696 addons.go:239] Setting addon ingress-dns=true in "addons-762402"
	I1109 13:29:34.450631   10696 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:34.450665   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450674   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450692   10696 addons.go:70] Setting ingress=true in profile "addons-762402"
	I1109 13:29:34.450702   10696 addons.go:70] Setting default-storageclass=true in profile "addons-762402"
	I1109 13:29:34.450704   10696 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-762402"
	I1109 13:29:34.450684   10696 addons.go:70] Setting gcp-auth=true in profile "addons-762402"
	I1109 13:29:34.450718   10696 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-762402"
	I1109 13:29:34.450737   10696 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-762402"
	I1109 13:29:34.450744   10696 addons.go:70] Setting registry-creds=true in profile "addons-762402"
	I1109 13:29:34.450751   10696 mustload.go:66] Loading cluster: addons-762402
	I1109 13:29:34.450755   10696 addons.go:239] Setting addon registry-creds=true in "addons-762402"
	I1109 13:29:34.450774   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450803   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450922   10696 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-762402"
	I1109 13:29:34.450977   10696 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-762402"
	I1109 13:29:34.451007   10696 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:34.451047   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451208   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451240   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451254   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451277   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451321   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451326   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.451541   10696 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-762402"
	I1109 13:29:34.451561   10696 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-762402"
	I1109 13:29:34.451586   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450718   10696 addons.go:70] Setting cloud-spanner=true in profile "addons-762402"
	I1109 13:29:34.451851   10696 addons.go:239] Setting addon cloud-spanner=true in "addons-762402"
	I1109 13:29:34.451866   10696 addons.go:70] Setting volcano=true in profile "addons-762402"
	I1109 13:29:34.451877   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.451884   10696 addons.go:239] Setting addon volcano=true in "addons-762402"
	I1109 13:29:34.451917   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.452073   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.452351   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.452363   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.452714   10696 addons.go:70] Setting metrics-server=true in profile "addons-762402"
	I1109 13:29:34.452760   10696 addons.go:239] Setting addon metrics-server=true in "addons-762402"
	I1109 13:29:34.452785   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450594   10696 addons.go:70] Setting yakd=true in profile "addons-762402"
	I1109 13:29:34.452848   10696 addons.go:239] Setting addon yakd=true in "addons-762402"
	I1109 13:29:34.452887   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.453233   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.453406   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.453716   10696 addons.go:70] Setting registry=true in profile "addons-762402"
	I1109 13:29:34.453737   10696 addons.go:239] Setting addon registry=true in "addons-762402"
	I1109 13:29:34.453761   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.453818   10696 out.go:179] * Verifying Kubernetes components...
	I1109 13:29:34.450711   10696 addons.go:239] Setting addon ingress=true in "addons-762402"
	I1109 13:29:34.454309   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.454882   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.455683   10696 addons.go:70] Setting storage-provisioner=true in profile "addons-762402"
	I1109 13:29:34.455704   10696 addons.go:239] Setting addon storage-provisioner=true in "addons-762402"
	I1109 13:29:34.455741   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.455915   10696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:34.456550   10696 addons.go:70] Setting volumesnapshots=true in profile "addons-762402"
	I1109 13:29:34.456570   10696 addons.go:239] Setting addon volumesnapshots=true in "addons-762402"
	I1109 13:29:34.456594   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.450684   10696 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-762402"
	I1109 13:29:34.457509   10696 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-762402"
	I1109 13:29:34.457542   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.462154   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.462302   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.463135   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.464354   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.510779   10696 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1109 13:29:34.512094   10696 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1109 13:29:34.512373   10696 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:34.512396   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1109 13:29:34.512449   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.513325   10696 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:34.513344   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1109 13:29:34.513392   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.527812   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.530111   10696 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1109 13:29:34.530169   10696 out.go:179]   - Using image docker.io/registry:3.0.0
	I1109 13:29:34.530366   10696 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1109 13:29:34.531937   10696 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:34.531956   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1109 13:29:34.532004   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.532145   10696 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:34.532159   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1109 13:29:34.532221   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.534004   10696 addons.go:239] Setting addon default-storageclass=true in "addons-762402"
	I1109 13:29:34.534073   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.534697   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.536577   10696 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1109 13:29:34.537886   10696 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1109 13:29:34.537947   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1109 13:29:34.538042   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.546841   10696 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1109 13:29:34.546920   10696 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1109 13:29:34.547792   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1109 13:29:34.547806   10696 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1109 13:29:34.547864   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.551621   10696 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 13:29:34.551653   10696 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 13:29:34.551711   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.561584   10696 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1109 13:29:34.562861   10696 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:34.562881   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1109 13:29:34.562931   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	W1109 13:29:34.567458   10696 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1109 13:29:34.567654   10696 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:34.572784   10696 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:34.574205   10696 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1109 13:29:34.575585   10696 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-762402"
	I1109 13:29:34.575631   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:34.576068   10696 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:34.579707   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1109 13:29:34.579774   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.580165   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:34.584241   10696 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1109 13:29:34.584306   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1109 13:29:34.584241   10696 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 13:29:34.585397   10696 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:34.585413   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1109 13:29:34.585472   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.586232   10696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:34.586247   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:29:34.586292   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.590615   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1109 13:29:34.591917   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1109 13:29:34.594219   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.597797   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1109 13:29:34.601673   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1109 13:29:34.602349   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1109 13:29:34.603723   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1109 13:29:34.604747   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1109 13:29:34.605753   10696 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1109 13:29:34.605833   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1109 13:29:34.605860   10696 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1109 13:29:34.605920   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.606733   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1109 13:29:34.606755   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1109 13:29:34.606817   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.606873   10696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 13:29:34.610711   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.613483   10696 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:34.613502   10696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:29:34.613553   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.615030   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.620964   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.626378   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.626823   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.628771   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.630211   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.642430   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.642597   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.660775   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.663373   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.667693   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	W1109 13:29:34.671251   10696 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:29:34.671335   10696 retry.go:31] will retry after 303.365831ms: ssh: handshake failed: EOF
	I1109 13:29:34.672781   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.676130   10696 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1109 13:29:34.677851   10696 out.go:179]   - Using image docker.io/busybox:stable
	I1109 13:29:34.678994   10696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:34.679050   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1109 13:29:34.679165   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:34.683216   10696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:34.712660   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:34.778145   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:34.797130   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:34.800493   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1109 13:29:34.800517   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1109 13:29:34.803596   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:34.807937   10696 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1109 13:29:34.807965   10696 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1109 13:29:34.815500   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1109 13:29:34.815517   10696 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1109 13:29:34.828206   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:34.835095   10696 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 13:29:34.835163   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1109 13:29:34.839882   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1109 13:29:34.839898   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1109 13:29:34.841199   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:34.845688   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:34.851741   10696 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:34.851759   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:34.851761   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1109 13:29:34.852888   10696 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1109 13:29:34.852941   10696 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1109 13:29:34.858452   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:34.858811   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1109 13:29:34.858828   10696 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1109 13:29:34.867836   10696 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 13:29:34.867865   10696 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 13:29:34.883531   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:34.906908   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1109 13:29:34.906932   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1109 13:29:34.917610   10696 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1109 13:29:34.917648   10696 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1109 13:29:34.926727   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:34.932575   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1109 13:29:34.932679   10696 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1109 13:29:34.947569   10696 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:34.947601   10696 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 13:29:34.972210   10696 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1109 13:29:34.972242   10696 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1109 13:29:34.986121   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:34.992400   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1109 13:29:34.992430   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1109 13:29:35.014539   10696 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:35.014582   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1109 13:29:35.037442   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1109 13:29:35.037467   10696 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1109 13:29:35.045193   10696 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1109 13:29:35.045218   10696 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1109 13:29:35.082980   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:35.099035   10696 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:35.099066   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1109 13:29:35.107510   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1109 13:29:35.107530   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1109 13:29:35.140857   10696 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1109 13:29:35.142724   10696 node_ready.go:35] waiting up to 6m0s for node "addons-762402" to be "Ready" ...
	I1109 13:29:35.173219   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1109 13:29:35.173249   10696 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1109 13:29:35.218303   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:35.230346   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1109 13:29:35.230434   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1109 13:29:35.231355   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:35.288613   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1109 13:29:35.288636   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1109 13:29:35.325805   10696 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:35.325830   10696 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1109 13:29:35.383887   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:35.651250   10696 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-762402" context rescaled to 1 replicas
	I1109 13:29:35.786915   10696 addons.go:480] Verifying addon registry=true in "addons-762402"
	I1109 13:29:35.787183   10696 addons.go:480] Verifying addon metrics-server=true in "addons-762402"
	I1109 13:29:35.788554   10696 out.go:179] * Verifying registry addon...
	I1109 13:29:35.788620   10696 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-762402 service yakd-dashboard -n yakd-dashboard
	
	I1109 13:29:35.790675   10696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1109 13:29:35.794008   10696 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:29:35.794075   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:36.293729   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:36.395392   10696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.177043555s)
	W1109 13:29:36.395446   10696 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:29:36.395468   10696 retry.go:31] will retry after 290.637821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:29:36.395480   10696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.164054616s)
	I1109 13:29:36.395497   10696 addons.go:480] Verifying addon ingress=true in "addons-762402"
	I1109 13:29:36.395782   10696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.011841449s)
	I1109 13:29:36.395814   10696 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-762402"
	I1109 13:29:36.397143   10696 out.go:179] * Verifying ingress addon...
	I1109 13:29:36.397145   10696 out.go:179] * Verifying csi-hostpath-driver addon...
	I1109 13:29:36.399702   10696 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 13:29:36.400486   10696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1109 13:29:36.402400   10696 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 13:29:36.402414   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:36.403690   10696 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:29:36.403708   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:36.687175   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:36.793940   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:36.902220   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:36.902983   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:37.145013   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:37.293464   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:37.402517   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:37.402635   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:37.793107   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:37.902291   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:37.903097   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:38.293297   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:38.402226   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:38.402891   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:38.793381   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:38.902140   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:38.902731   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:39.103250   10696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.416035732s)
	W1109 13:29:39.145127   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:39.293383   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:39.402444   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:39.402943   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:39.793074   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:39.902394   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:39.903174   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:40.293269   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:40.402212   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:40.403019   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:40.792955   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:40.902009   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:40.902762   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:41.293288   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:41.402046   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:41.402852   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:41.644597   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:41.793413   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:41.902712   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:41.902724   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:42.137131   10696 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1109 13:29:42.137189   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:42.154744   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:42.250672   10696 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1109 13:29:42.261996   10696 addons.go:239] Setting addon gcp-auth=true in "addons-762402"
	I1109 13:29:42.262041   10696 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:29:42.262354   10696 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:29:42.279234   10696 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1109 13:29:42.279279   10696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:29:42.294261   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:42.296058   10696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:29:42.384804   10696 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:42.386101   10696 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1109 13:29:42.387067   10696 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1109 13:29:42.387082   10696 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1109 13:29:42.398866   10696 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1109 13:29:42.398882   10696 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1109 13:29:42.402330   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:42.402746   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:42.411222   10696 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:29:42.411235   10696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1109 13:29:42.422792   10696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:29:42.700493   10696 addons.go:480] Verifying addon gcp-auth=true in "addons-762402"
	I1109 13:29:42.701980   10696 out.go:179] * Verifying gcp-auth addon...
	I1109 13:29:42.703660   10696 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1109 13:29:42.705769   10696 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1109 13:29:42.705789   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:42.793595   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:42.902625   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:42.902760   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:43.205769   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:43.293447   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:43.402736   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:43.402767   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:29:43.645627   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:43.706936   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:43.807390   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:43.908083   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:43.908196   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:44.206064   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:44.292855   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:44.401936   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:44.402715   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:44.706265   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:44.793505   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:44.902618   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:44.902881   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:45.205864   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:45.292587   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:45.402841   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:45.402971   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:45.706125   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:45.793079   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:45.902805   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:45.903111   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:46.145229   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:46.206078   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:46.292861   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:46.402128   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:46.402948   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:46.706063   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:46.793088   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:46.902308   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:46.903033   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:47.206250   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:47.293120   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:47.402374   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:47.403047   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:47.706352   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:47.793716   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:47.901996   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:47.903246   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:48.145524   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:48.206813   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:48.292677   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:48.402757   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:48.402808   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:48.706032   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:48.793002   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:48.902146   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:48.902960   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:49.206390   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:49.293227   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:49.402324   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:49.403266   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:49.706831   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:49.792725   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:49.902104   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:49.902692   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:50.206438   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:50.293367   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:50.402608   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:50.402658   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:29:50.644912   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:50.705934   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:50.792922   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:50.901903   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:50.902907   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:51.206081   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:51.292809   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:51.401672   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:51.402627   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:51.705908   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:51.806404   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:51.907081   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:51.907084   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:52.206236   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:52.293089   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:52.402105   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:52.403035   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:52.645226   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:52.706097   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:52.793116   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:52.902160   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:52.903157   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:53.206114   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:53.293089   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:53.401983   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:53.402983   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:53.706098   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:53.793160   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:53.902760   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:53.903207   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:54.206380   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:54.293279   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:54.402393   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:54.403244   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:54.645665   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:54.706468   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:54.793620   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:54.902937   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:54.903125   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:55.206411   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:55.293293   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:55.402387   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:55.402403   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:55.706467   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:55.793313   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:55.902813   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:55.902866   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:56.205977   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:56.292629   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:56.402914   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:56.402968   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:56.705876   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:56.792604   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:56.902684   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:56.902685   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:29:57.144940   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:57.205813   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:57.292564   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:57.402861   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:57.403002   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:57.706187   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:57.793041   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:57.902275   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:57.902986   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:58.206220   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:58.292974   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:58.402270   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:58.403117   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:58.706135   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:58.793061   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:58.902074   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:58.903080   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:29:59.145421   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:29:59.206420   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:59.293156   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:59.402116   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:29:59.403096   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:59.706303   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:29:59.793266   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:29:59.902578   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:29:59.902600   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:00.205904   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:00.292732   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:00.401610   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:00.402531   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:00.705890   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:00.792838   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:00.902071   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:00.902976   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:01.206192   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:01.293414   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:01.402822   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:01.402959   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:30:01.645106   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:01.706249   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:01.793201   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:01.902161   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:01.903196   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:02.206118   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:02.292906   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:02.401973   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:02.402819   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:02.705978   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:02.793057   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:02.902356   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:02.903041   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:03.206036   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:03.293024   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:03.402223   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:03.402961   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:03.705560   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:03.793910   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:03.902378   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:03.903060   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:30:04.145621   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:04.206564   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:04.293513   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.403053   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:04.403128   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:04.706580   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:04.793713   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.903034   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:04.903057   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:05.206708   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:05.293609   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:05.402934   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:05.403078   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.706148   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:05.793309   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:05.902371   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:05.902462   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:30:06.145735   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:06.206812   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:06.293675   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.403002   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.403003   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:06.706155   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:06.793107   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.902101   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.902914   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.206076   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:07.293089   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:07.402200   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.403120   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.705832   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:07.792683   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:07.902848   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.902939   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:08.205913   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:08.292865   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.401913   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.402740   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:30:08.645106   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:08.706052   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:08.792899   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.902442   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.903135   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.206354   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.293363   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.402622   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:09.402622   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.706508   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.793590   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.902658   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.902671   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.205564   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.293405   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.402451   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:10.402707   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:30:10.645971   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:10.705972   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.793146   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.902288   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.903205   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.206319   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.293179   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.402211   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.403043   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.706097   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.793061   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.902053   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.902973   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.206300   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.293324   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.402415   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.402493   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.706554   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.793429   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.902608   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.902716   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1109 13:30:13.145843   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:13.205929   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.292591   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.402745   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.402791   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:13.706796   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.792669   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.902710   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.902746   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.206558   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.293567   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.403041   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:14.403047   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.706178   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.793300   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.902895   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.903043   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.205878   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.292576   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.402512   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.402611   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1109 13:30:15.645815   10696 node_ready.go:57] node "addons-762402" has "Ready":"False" status (will retry)
	I1109 13:30:15.705933   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.793049   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.902481   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:15.903424   10696 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:15.903444   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.146132   10696 node_ready.go:49] node "addons-762402" is "Ready"
	I1109 13:30:16.146166   10696 node_ready.go:38] duration metric: took 41.003417549s for node "addons-762402" to be "Ready" ...
	I1109 13:30:16.146182   10696 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:30:16.146236   10696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:30:16.165826   10696 api_server.go:72] duration metric: took 41.715389771s to wait for apiserver process to appear ...
	I1109 13:30:16.165854   10696 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:30:16.165877   10696 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 13:30:16.170981   10696 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 13:30:16.172162   10696 api_server.go:141] control plane version: v1.34.1
	I1109 13:30:16.172191   10696 api_server.go:131] duration metric: took 6.329717ms to wait for apiserver health ...
	I1109 13:30:16.172202   10696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:30:16.180912   10696 system_pods.go:59] 20 kube-system pods found
	I1109 13:30:16.180950   10696 system_pods.go:61] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:16.180961   10696 system_pods.go:61] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:16.180972   10696 system_pods.go:61] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:16.180981   10696 system_pods.go:61] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:16.180989   10696 system_pods.go:61] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:16.180996   10696 system_pods.go:61] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:16.181002   10696 system_pods.go:61] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:16.181006   10696 system_pods.go:61] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:16.181011   10696 system_pods.go:61] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:16.181020   10696 system_pods.go:61] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:16.181025   10696 system_pods.go:61] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:16.181030   10696 system_pods.go:61] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:16.181036   10696 system_pods.go:61] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:16.181045   10696 system_pods.go:61] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:16.181053   10696 system_pods.go:61] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:16.181060   10696 system_pods.go:61] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:16.181074   10696 system_pods.go:61] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:16.181083   10696 system_pods.go:61] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.181091   10696 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.181098   10696 system_pods.go:61] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:16.181106   10696 system_pods.go:74] duration metric: took 8.897082ms to wait for pod list to return data ...
	I1109 13:30:16.181114   10696 default_sa.go:34] waiting for default service account to be created ...
	I1109 13:30:16.185371   10696 default_sa.go:45] found service account: "default"
	I1109 13:30:16.185391   10696 default_sa.go:55] duration metric: took 4.270596ms for default service account to be created ...
	I1109 13:30:16.185401   10696 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 13:30:16.281072   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.282919   10696 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:16.282986   10696 system_pods.go:89] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:16.283010   10696 system_pods.go:89] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:16.283029   10696 system_pods.go:89] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:16.283049   10696 system_pods.go:89] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:16.283068   10696 system_pods.go:89] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:16.283077   10696 system_pods.go:89] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:16.283084   10696 system_pods.go:89] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:16.283090   10696 system_pods.go:89] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:16.283095   10696 system_pods.go:89] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:16.283105   10696 system_pods.go:89] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:16.283110   10696 system_pods.go:89] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:16.283118   10696 system_pods.go:89] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:16.283126   10696 system_pods.go:89] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:16.283135   10696 system_pods.go:89] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:16.283143   10696 system_pods.go:89] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:16.283152   10696 system_pods.go:89] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:16.283159   10696 system_pods.go:89] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:16.283167   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.283175   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.283184   10696 system_pods.go:89] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:16.283203   10696 retry.go:31] will retry after 207.228037ms: missing components: kube-dns
	I1109 13:30:16.379595   10696 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:30:16.379617   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.403046   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.403169   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.499868   10696 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:16.499915   10696 system_pods.go:89] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:16.499925   10696 system_pods.go:89] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:16.499934   10696 system_pods.go:89] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:16.499942   10696 system_pods.go:89] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:16.499950   10696 system_pods.go:89] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:16.499958   10696 system_pods.go:89] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:16.499964   10696 system_pods.go:89] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:16.499970   10696 system_pods.go:89] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:16.499975   10696 system_pods.go:89] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:16.499984   10696 system_pods.go:89] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:16.499989   10696 system_pods.go:89] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:16.499995   10696 system_pods.go:89] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:16.500002   10696 system_pods.go:89] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:16.500011   10696 system_pods.go:89] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:16.500021   10696 system_pods.go:89] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:16.500028   10696 system_pods.go:89] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:16.500048   10696 system_pods.go:89] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:16.500057   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.500066   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.500073   10696 system_pods.go:89] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:16.500089   10696 retry.go:31] will retry after 251.088942ms: missing components: kube-dns
	I1109 13:30:16.707410   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.755591   10696 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:16.755629   10696 system_pods.go:89] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:16.755657   10696 system_pods.go:89] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:16.755668   10696 system_pods.go:89] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:16.755678   10696 system_pods.go:89] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:16.755688   10696 system_pods.go:89] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:16.755694   10696 system_pods.go:89] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:16.755701   10696 system_pods.go:89] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:16.755706   10696 system_pods.go:89] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:16.755712   10696 system_pods.go:89] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:16.755725   10696 system_pods.go:89] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:16.755731   10696 system_pods.go:89] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:16.755736   10696 system_pods.go:89] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:16.755744   10696 system_pods.go:89] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:16.755754   10696 system_pods.go:89] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:16.755762   10696 system_pods.go:89] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:16.755774   10696 system_pods.go:89] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:16.755782   10696 system_pods.go:89] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:16.755795   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.755806   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:16.755814   10696 system_pods.go:89] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:16.755832   10696 retry.go:31] will retry after 455.996461ms: missing components: kube-dns
	I1109 13:30:16.793452   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.903352   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.903413   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.207298   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.215391   10696 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:17.215422   10696 system_pods.go:89] "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:17.215429   10696 system_pods.go:89] "coredns-66bc5c9577-lqlkm" [467ba0f6-9cb9-4b24-bc84-f1df381f2394] Running
	I1109 13:30:17.215436   10696 system_pods.go:89] "csi-hostpath-attacher-0" [950ae3e5-88d3-427d-b6ee-dbf6049459ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:17.215441   10696 system_pods.go:89] "csi-hostpath-resizer-0" [ef9e64b5-fbd2-4c9a-b8d2-8fa1712a7f2b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:30:17.215447   10696 system_pods.go:89] "csi-hostpathplugin-77pp6" [1b925317-a091-4b5c-b511-1d166aa717c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:17.215451   10696 system_pods.go:89] "etcd-addons-762402" [a4ed6417-0e73-4b80-8a1b-dff340148bca] Running
	I1109 13:30:17.215455   10696 system_pods.go:89] "kindnet-qcnps" [7eafef21-09ff-47a7-aff8-f25939707e51] Running
	I1109 13:30:17.215459   10696 system_pods.go:89] "kube-apiserver-addons-762402" [79d2841f-44e3-45a6-8315-7af6c941659e] Running
	I1109 13:30:17.215462   10696 system_pods.go:89] "kube-controller-manager-addons-762402" [7305d7af-58fb-4343-9207-62d64eede80c] Running
	I1109 13:30:17.215466   10696 system_pods.go:89] "kube-ingress-dns-minikube" [f4e40e66-3fff-46bc-a4f1-38f02c8f1754] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:17.215471   10696 system_pods.go:89] "kube-proxy-8b626" [51cf1548-03f3-44c6-b3d0-c61fd2e5daf8] Running
	I1109 13:30:17.215475   10696 system_pods.go:89] "kube-scheduler-addons-762402" [8f39a5bd-d0ee-4980-8d6c-913fa617bd89] Running
	I1109 13:30:17.215480   10696 system_pods.go:89] "metrics-server-85b7d694d7-992g6" [bf11616a-d50d-49dd-b4f5-deed14c6349b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:17.215487   10696 system_pods.go:89] "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:17.215492   10696 system_pods.go:89] "registry-6b586f9694-xvmzk" [c2c01b58-6e2b-43ac-a6be-0f2a6af3bf46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:17.215497   10696 system_pods.go:89] "registry-creds-764b6fb674-2gshl" [efb22ccb-8a7a-4184-a1eb-861b2390bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:17.215504   10696 system_pods.go:89] "registry-proxy-z7stg" [3ff06e00-5ad2-47cb-afd2-0c1ef8f5dc44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:17.215509   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f24q2" [704ad561-732c-455f-9f0a-a6d37202431a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:17.215516   10696 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jcz8h" [c845fd60-1aa2-4c15-9601-bebc1eed8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:17.215522   10696 system_pods.go:89] "storage-provisioner" [31a1cec1-234a-4701-adfb-cb2e1a5522e0] Running
	I1109 13:30:17.215529   10696 system_pods.go:126] duration metric: took 1.030122205s to wait for k8s-apps to be running ...
	I1109 13:30:17.215536   10696 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:30:17.215573   10696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:30:17.227825   10696 system_svc.go:56] duration metric: took 12.281992ms WaitForService to wait for kubelet
	I1109 13:30:17.227851   10696 kubeadm.go:587] duration metric: took 42.777420022s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:30:17.227872   10696 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:30:17.230044   10696 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 13:30:17.230078   10696 node_conditions.go:123] node cpu capacity is 8
	I1109 13:30:17.230092   10696 node_conditions.go:105] duration metric: took 2.210112ms to run NodePressure ...
	I1109 13:30:17.230109   10696 start.go:242] waiting for startup goroutines ...
	I1109 13:30:17.308431   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.402844   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.402920   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.707311   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.793410   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.903416   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.903477   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.207855   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.293709   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.403305   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.403401   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.706547   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.793763   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.903061   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.903131   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:19.208710   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.294741   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:19.403548   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:19.404980   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:19.707089   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.806814   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:19.902355   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:19.903067   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.206856   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.293351   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.403668   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.403738   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.707335   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.794301   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.903158   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.903227   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.207154   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.293972   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.402981   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.403842   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.706619   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.794607   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.903613   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.903890   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.206484   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.294288   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.403748   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.403777   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.707310   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.794244   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.902937   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.903030   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.206323   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.293408   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.403617   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.403674   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.706766   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.793107   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.903100   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.903867   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.206953   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.294023   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.405980   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.407049   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.706992   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.793848   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.903560   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.903655   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.207513   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.294116   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.403251   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.403731   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.707401   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.793964   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.902668   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.903296   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.206788   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.292703   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.403274   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.403348   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.706552   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.795075   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.903040   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.903492   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.207865   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.294087   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.403088   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:27.406245   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.706421   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.794244   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.903042   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.903155   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.207360   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.307539   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.531400   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.531598   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:28.706813   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.792845   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.903167   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.903315   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.206921   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.293340   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.404107   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.404594   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.706379   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.793866   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.902393   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.903254   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.210588   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.294351   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.403252   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.403446   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.707015   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.793198   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.903049   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.903714   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.207341   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.293996   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.402775   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.403432   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.706894   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.807272   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.907677   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.907701   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.206323   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.293946   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.402702   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.403173   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.706749   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.794230   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.903195   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.903214   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.207181   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.293563   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.403296   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.403294   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.707101   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.793516   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.902708   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.902857   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.206899   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.307043   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.407940   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.408062   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.706327   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.793376   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.903015   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.903028   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.207082   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.293705   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.403716   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.403926   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.707989   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.793874   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.902729   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.903272   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.207188   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.293831   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.403353   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.403430   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.707219   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.793811   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.903568   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.903623   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.207362   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.293381   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.402547   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.402733   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.706975   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.793358   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.903229   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.903315   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.207323   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.308162   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.402382   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.403254   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.707593   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.794311   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.903483   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.903521   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.207320   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.293275   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.402945   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.403082   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.706497   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.793999   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.902544   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.903355   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.207490   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.294151   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.403546   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.403718   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.710422   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.794453   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.906173   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.906536   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.281050   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.294242   10696 kapi.go:107] duration metric: took 1m5.503565075s to wait for kubernetes.io/minikube-addons=registry ...
	I1109 13:30:41.402844   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.402934   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.754949   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.902978   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.903778   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.206912   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.402597   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.403587   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.706800   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.902892   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.903362   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.206827   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.403377   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.403551   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.706719   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.903384   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.903419   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.207297   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.403306   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.403349   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.707547   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.903447   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.903524   10696 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.206955   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.402557   10696 kapi.go:107] duration metric: took 1m9.002850514s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 13:30:45.403940   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.740232   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.904339   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.207090   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.404375   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.846381   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.903812   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.206946   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.404265   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.707418   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.904515   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.206521   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.403197   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.707633   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.904414   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.207025   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.403673   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.707232   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.903889   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.207112   10696 kapi.go:107] duration metric: took 1m7.503446673s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1109 13:30:50.208544   10696 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-762402 cluster.
	I1109 13:30:50.209744   10696 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1109 13:30:50.210997   10696 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1109 13:30:50.404610   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.903403   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.404066   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.904063   10696 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.403267   10696 kapi.go:107] duration metric: took 1m16.002779738s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1109 13:30:52.404684   10696 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, registry-creds, amd-gpu-device-plugin, default-storageclass, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1109 13:30:52.405663   10696 addons.go:515] duration metric: took 1m17.955207788s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner registry-creds amd-gpu-device-plugin default-storageclass inspektor-gadget nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1109 13:30:52.405710   10696 start.go:247] waiting for cluster config update ...
	I1109 13:30:52.405735   10696 start.go:256] writing updated cluster config ...
	I1109 13:30:52.405999   10696 ssh_runner.go:195] Run: rm -f paused
	I1109 13:30:52.409879   10696 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:30:52.412342   10696 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lqlkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.415631   10696 pod_ready.go:94] pod "coredns-66bc5c9577-lqlkm" is "Ready"
	I1109 13:30:52.415658   10696 pod_ready.go:86] duration metric: took 3.29574ms for pod "coredns-66bc5c9577-lqlkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.417162   10696 pod_ready.go:83] waiting for pod "etcd-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.420179   10696 pod_ready.go:94] pod "etcd-addons-762402" is "Ready"
	I1109 13:30:52.420195   10696 pod_ready.go:86] duration metric: took 3.015876ms for pod "etcd-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.421761   10696 pod_ready.go:83] waiting for pod "kube-apiserver-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.424705   10696 pod_ready.go:94] pod "kube-apiserver-addons-762402" is "Ready"
	I1109 13:30:52.424720   10696 pod_ready.go:86] duration metric: took 2.944011ms for pod "kube-apiserver-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.426125   10696 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:52.812725   10696 pod_ready.go:94] pod "kube-controller-manager-addons-762402" is "Ready"
	I1109 13:30:52.812752   10696 pod_ready.go:86] duration metric: took 386.612063ms for pod "kube-controller-manager-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:53.013486   10696 pod_ready.go:83] waiting for pod "kube-proxy-8b626" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:53.413156   10696 pod_ready.go:94] pod "kube-proxy-8b626" is "Ready"
	I1109 13:30:53.413183   10696 pod_ready.go:86] duration metric: took 399.668469ms for pod "kube-proxy-8b626" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:53.613742   10696 pod_ready.go:83] waiting for pod "kube-scheduler-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:54.013593   10696 pod_ready.go:94] pod "kube-scheduler-addons-762402" is "Ready"
	I1109 13:30:54.013620   10696 pod_ready.go:86] duration metric: took 399.854464ms for pod "kube-scheduler-addons-762402" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:30:54.013636   10696 pod_ready.go:40] duration metric: took 1.603734073s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:30:54.056474   10696 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 13:30:54.058246   10696 out.go:179] * Done! kubectl is now configured to use "addons-762402" cluster and "default" namespace by default
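
The kapi.go:96 lines above are minikube polling pods by label until each addon's pods leave Pending, and the pod_ready lines repeat the same check for the core kube-system components. Below is a rough client-go sketch of that pattern, not minikube's own implementation; the kubeconfig path, namespace, and poll interval are assumptions, and the label selector is taken from the log.

// Rough sketch of label-based pod waiting, as seen in the kapi.go:96 lines above.
// Not the minikube implementation; kubeconfig path, namespace and poll interval
// are assumptions, the label selector comes from the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver" // label from the log above
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !podReady(p) {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			fmt.Printf("all pods matching %q are Ready\n", selector)
			return
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly twice a second
	}
}

Run against the addons-762402 cluster while the CSI hostpath pods are still starting, a loop like this would print the same kind of "waiting for pod" lines until they report Ready.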
	
	
	==> CRI-O <==
	Nov 09 13:30:51 addons-762402 crio[772]: time="2025-11-09T13:30:51.896009081Z" level=info msg="Starting container: af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f" id=c13d1296-ef76-41ac-8ab2-30aa9ceac5b7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 13:30:51 addons-762402 crio[772]: time="2025-11-09T13:30:51.898447326Z" level=info msg="Started container" PID=6130 containerID=af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f description=kube-system/csi-hostpathplugin-77pp6/csi-snapshotter id=c13d1296-ef76-41ac-8ab2-30aa9ceac5b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4536052697a3553cbe735224217152977fa391f49988d938ad91dc07cf568643
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.85115884Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0f9b8fc7-9a5e-40a3-a712-240074653511 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.85125532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.857658306Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9369e0852b6409f8bea6b757f0f56a92dbecda67ba945dc7aa3f77ea2de4fd3 UID:69f0b611-7084-4d17-814a-0ed1e841dc08 NetNS:/var/run/netns/6bc23067-55ce-40df-ba11-c45eca403b1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d30708}] Aliases:map[]}"
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.85769526Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.866690664Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9369e0852b6409f8bea6b757f0f56a92dbecda67ba945dc7aa3f77ea2de4fd3 UID:69f0b611-7084-4d17-814a-0ed1e841dc08 NetNS:/var/run/netns/6bc23067-55ce-40df-ba11-c45eca403b1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d30708}] Aliases:map[]}"
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.866805091Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.867835807Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.868962415Z" level=info msg="Ran pod sandbox f9369e0852b6409f8bea6b757f0f56a92dbecda67ba945dc7aa3f77ea2de4fd3 with infra container: default/busybox/POD" id=0f9b8fc7-9a5e-40a3-a712-240074653511 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.869829653Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1afc6328-d65d-4cb3-bd4e-17eb8f57eebd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.869922215Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1afc6328-d65d-4cb3-bd4e-17eb8f57eebd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.869954634Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1afc6328-d65d-4cb3-bd4e-17eb8f57eebd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.870390842Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b61b7ce7-fc73-4fcd-b3e6-3f97e18d20a7 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:30:54 addons-762402 crio[772]: time="2025-11-09T13:30:54.87185914Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.549218478Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b61b7ce7-fc73-4fcd-b3e6-3f97e18d20a7 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.549737878Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=afec97f5-527b-4981-9002-f4a8dd384e38 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.551088368Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=40e29f1b-7088-4ea2-97ca-dcef8619add8 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.554330638Z" level=info msg="Creating container: default/busybox/busybox" id=053fef80-9203-4e09-a58d-5ab945b38ff0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.554436574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.559253233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.559734025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.588722503Z" level=info msg="Created container 011e3c5303bab789a2cfbd45398bd68a8496129c420a0a840e5a24f83ae9aaf2: default/busybox/busybox" id=053fef80-9203-4e09-a58d-5ab945b38ff0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.589195693Z" level=info msg="Starting container: 011e3c5303bab789a2cfbd45398bd68a8496129c420a0a840e5a24f83ae9aaf2" id=3f3a6f94-d1ee-4818-8478-5037b9deb4d6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 13:30:55 addons-762402 crio[772]: time="2025-11-09T13:30:55.590805884Z" level=info msg="Started container" PID=6232 containerID=011e3c5303bab789a2cfbd45398bd68a8496129c420a0a840e5a24f83ae9aaf2 description=default/busybox/busybox id=3f3a6f94-d1ee-4818-8478-5037b9deb4d6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9369e0852b6409f8bea6b757f0f56a92dbecda67ba945dc7aa3f77ea2de4fd3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	011e3c5303bab       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   f9369e0852b64       busybox                                     default
	af27443b9f896       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          11 seconds ago       Running             csi-snapshotter                          0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	faa104590cba3       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          12 seconds ago       Running             csi-provisioner                          0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	4e995ec9dee84       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            13 seconds ago       Running             liveness-probe                           0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	8500f930e3f9e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 13 seconds ago       Running             gcp-auth                                 0                   29df20bcc4c92       gcp-auth-78565c9fb4-6bbn8                   gcp-auth
	acb8a5ac78d47       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             14 seconds ago       Exited              patch                                    2                   6973d9ea9ed08       gcp-auth-certs-patch-ncnbg                  gcp-auth
	b930aa6a12030       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           15 seconds ago       Running             hostpath                                 0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	3c81403e30d89       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            15 seconds ago       Running             gadget                                   0                   83e5aa4c4fcda       gadget-d5mhg                                gadget
	d67da63b5ee91       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                18 seconds ago       Running             node-driver-registrar                    0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	96aed698532bb       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             18 seconds ago       Running             controller                               0                   4c56016694fb0       ingress-nginx-controller-675c5ddd98-6jkpc   ingress-nginx
	0aaed23bb5d29       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              22 seconds ago       Running             registry-proxy                           0                   99b3c34b1d144       registry-proxy-z7stg                        kube-system
	08f89d0732d38       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   24 seconds ago       Running             csi-external-health-monitor-controller   0                   4536052697a35       csi-hostpathplugin-77pp6                    kube-system
	2afddde1486e2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     24 seconds ago       Running             amd-gpu-device-plugin                    0                   3bfdd366bfc78       amd-gpu-device-plugin-8nlkf                 kube-system
	c2e8f6876246e       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     26 seconds ago       Running             nvidia-device-plugin-ctr                 0                   8b86bcd173c24       nvidia-device-plugin-daemonset-rrlcz        kube-system
	c7d3694d68c9c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   28 seconds ago       Exited              create                                   0                   6afdbd453d9df       gcp-auth-certs-create-fswsp                 gcp-auth
	c947eeaaf49bf       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      29 seconds ago       Running             volume-snapshot-controller               0                   978af21e1ae4d       snapshot-controller-7d9fbc56b8-jcz8h        kube-system
	3c9fabcff63aa       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              29 seconds ago       Running             csi-resizer                              0                   b9622b93a6fef       csi-hostpath-resizer-0                      kube-system
	6325296d296d9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   30 seconds ago       Exited              patch                                    0                   0f04a4fdedfa5       ingress-nginx-admission-patch-f6fqd         ingress-nginx
	057fd0d666013       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             30 seconds ago       Running             csi-attacher                             0                   2bdc8b188c8d3       csi-hostpath-attacher-0                     kube-system
	5e49dd8732922       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago       Running             volume-snapshot-controller               0                   c0311eb2628f9       snapshot-controller-7d9fbc56b8-f24q2        kube-system
	2a8876a52c7ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   32 seconds ago       Exited              create                                   0                   8f27ad33070ce       ingress-nginx-admission-create-l24wm        ingress-nginx
	403c426d75120       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           33 seconds ago       Running             registry                                 0                   38aeac4681dc2       registry-6b586f9694-xvmzk                   kube-system
	9bf784651f15b       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              34 seconds ago       Running             yakd                                     0                   0b2b2a6278449       yakd-dashboard-5ff678cb9-6fdjm              yakd-dashboard
	72e148959a3f5       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               37 seconds ago       Running             cloud-spanner-emulator                   0                   892af68365186       cloud-spanner-emulator-6f9fcf858b-bs44j     default
	8c332e600a86f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               39 seconds ago       Running             minikube-ingress-dns                     0                   f8946f6cd2082       kube-ingress-dns-minikube                   kube-system
	6d09ceddae1c8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             44 seconds ago       Running             local-path-provisioner                   0                   1896c514f17fd       local-path-provisioner-648f6765c9-xqxbg     local-path-storage
	e93137eb9f506       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        45 seconds ago       Running             metrics-server                           0                   4253ff216667f       metrics-server-85b7d694d7-992g6             kube-system
	befdac5dae601       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             46 seconds ago       Running             storage-provisioner                      0                   716919a1cc029       storage-provisioner                         kube-system
	62effce4d4405       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             46 seconds ago       Running             coredns                                  0                   4ae95db059286       coredns-66bc5c9577-lqlkm                    kube-system
	79692b5ce1377       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   cae308b8f8421       kube-proxy-8b626                            kube-system
	5af868c65929f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   5a66c977c864e       kindnet-qcnps                               kube-system
	5bb7efe058cec       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   26fc1815b1cc4       kube-apiserver-addons-762402                kube-system
	861090a7ec881       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   690bac2e18b57       kube-controller-manager-addons-762402       kube-system
	790087032ffe1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   353dede2d9cfd       kube-scheduler-addons-762402                kube-system
	09ed3ea084064       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   4c4c0e0ebbaec       etcd-addons-762402                          kube-system
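
The table above is runtime-side (CRI) container state. Roughly the same information is available from the API server through pod container statuses; the sketch below lists container name, state, restart count and image across all namespaces, with the kubeconfig path assumed.

// Sketch: list container statuses from the API server, approximating the
// columns of the runtime-side table above. Kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// state renders a container state the way the table above labels it.
func state(s corev1.ContainerState) string {
	switch {
	case s.Running != nil:
		return "Running"
	case s.Terminated != nil:
		return "Exited"
	case s.Waiting != nil:
		return "Waiting: " + s.Waiting.Reason
	}
	return "Unknown"
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.ContainerStatuses {
			fmt.Printf("%-40s %-10s restarts=%d  %s/%s  %s\n",
				c.Name, state(c.State), c.RestartCount, p.Namespace, p.Name, c.Image)
		}
	}
}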
	
	
	==> coredns [62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e] <==
	[INFO] 10.244.0.16:54402 - 48578 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.004879505s
	[INFO] 10.244.0.16:40804 - 63258 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000069422s
	[INFO] 10.244.0.16:40804 - 62916 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000113707s
	[INFO] 10.244.0.16:55225 - 17705 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000074398s
	[INFO] 10.244.0.16:55225 - 17448 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000062173s
	[INFO] 10.244.0.16:55340 - 48416 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000076477s
	[INFO] 10.244.0.16:55340 - 48864 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000133431s
	[INFO] 10.244.0.16:36156 - 49381 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079449s
	[INFO] 10.244.0.16:36156 - 49586 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011129s
	[INFO] 10.244.0.22:47768 - 7358 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000182384s
	[INFO] 10.244.0.22:51502 - 56351 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000265752s
	[INFO] 10.244.0.22:58312 - 2157 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010465s
	[INFO] 10.244.0.22:54899 - 24860 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092574s
	[INFO] 10.244.0.22:44364 - 19746 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135868s
	[INFO] 10.244.0.22:42023 - 28000 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113668s
	[INFO] 10.244.0.22:56622 - 43296 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002952523s
	[INFO] 10.244.0.22:43750 - 13151 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003298777s
	[INFO] 10.244.0.22:53627 - 30386 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004344315s
	[INFO] 10.244.0.22:42651 - 48373 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004433874s
	[INFO] 10.244.0.22:60321 - 9920 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004138559s
	[INFO] 10.244.0.22:34186 - 36428 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004200207s
	[INFO] 10.244.0.22:53566 - 18775 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00376843s
	[INFO] 10.244.0.22:50242 - 34733 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00536655s
	[INFO] 10.244.0.22:55551 - 40005 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000792326s
	[INFO] 10.244.0.22:38405 - 43085 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002139968s
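
The NXDOMAIN answers above are pod resolvers walking their resolv.conf search list (the .local and GCE *.internal suffixes are visible) before the final query returns NOERROR; with the usual ndots:5 in a pod's resolv.conf, even storage.googleapis.com is tried with every cluster suffix first. A small sketch, assuming it runs inside a pod on this cluster, of a lookup that produces that pattern:

// Small sketch, assuming it runs inside a pod on this cluster: one lookup of
// the registry Service name. The repeated NXDOMAIN lines in the coredns log
// above are the resolver appending each resolv.conf search suffix before the
// fully qualified name finally answers NOERROR.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := net.DefaultResolver.LookupHost(ctx, "registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("registry.kube-system resolves to:", addrs)
}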
	
	
	==> describe nodes <==
	Name:               addons-762402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-762402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=addons-762402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_29_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-762402
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-762402"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:29:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-762402
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:31:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:31:00 +0000   Sun, 09 Nov 2025 13:29:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:31:00 +0000   Sun, 09 Nov 2025 13:29:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:31:00 +0000   Sun, 09 Nov 2025 13:29:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:31:00 +0000   Sun, 09 Nov 2025 13:30:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-762402
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fefadf4c-cb63-48e2-9144-41b567f755ed
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-6f9fcf858b-bs44j      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  gadget                      gadget-d5mhg                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  gcp-auth                    gcp-auth-78565c9fb4-6bbn8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-6jkpc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         87s
	  kube-system                 amd-gpu-device-plugin-8nlkf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-66bc5c9577-lqlkm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 csi-hostpathplugin-77pp6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 etcd-addons-762402                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         95s
	  kube-system                 kindnet-qcnps                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-addons-762402                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-addons-762402        200m (2%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-8b626                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-addons-762402                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 metrics-server-85b7d694d7-992g6              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         88s
	  kube-system                 nvidia-device-plugin-daemonset-rrlcz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 registry-6b586f9694-xvmzk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 registry-creds-764b6fb674-2gshl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 registry-proxy-z7stg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 snapshot-controller-7d9fbc56b8-f24q2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 snapshot-controller-7d9fbc56b8-jcz8h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  local-path-storage          local-path-provisioner-648f6765c9-xqxbg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-6fdjm               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 87s   kube-proxy       
	  Normal  Starting                 95s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s   kubelet          Node addons-762402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s   kubelet          Node addons-762402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s   kubelet          Node addons-762402 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s   node-controller  Node addons-762402 event: Registered Node addons-762402 in Controller
	  Normal  NodeReady                48s   kubelet          Node addons-762402 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 9 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000896] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000998] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.362229] i8042: Warning: Keylock active
	[  +0.010628] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.473113] block sda: the capability attribute has been deprecated.
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b] <==
	{"level":"warn","ts":"2025-11-09T13:29:26.053280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.058883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.064692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.071485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.077031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.083034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.088493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.105041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.110723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.116935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:26.164808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:36.866809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:29:36.873570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:03.541946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:03.561435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:03.567191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40432","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:30:28.529974Z","caller":"traceutil/trace.go:172","msg":"trace[175997618] linearizableReadLoop","detail":"{readStateIndex:1020; appliedIndex:1020; }","duration":"128.030434ms","start":"2025-11-09T13:30:28.401926Z","end":"2025-11-09T13:30:28.529956Z","steps":["trace[175997618] 'read index received'  (duration: 128.023461ms)","trace[175997618] 'applied index is now lower than readState.Index'  (duration: 6.023µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:30:28.530032Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.080167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:30:28.530101Z","caller":"traceutil/trace.go:172","msg":"trace[349352788] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:990; }","duration":"128.168336ms","start":"2025-11-09T13:30:28.401922Z","end":"2025-11-09T13:30:28.530090Z","steps":["trace[349352788] 'agreement among raft nodes before linearized reading'  (duration: 128.042868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:30:28.530111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.171568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:30:28.530113Z","caller":"traceutil/trace.go:172","msg":"trace[1618212384] transaction","detail":"{read_only:false; response_revision:991; number_of_response:1; }","duration":"139.418953ms","start":"2025-11-09T13:30:28.390675Z","end":"2025-11-09T13:30:28.530094Z","steps":["trace[1618212384] 'process raft request'  (duration: 139.308424ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:30:28.530147Z","caller":"traceutil/trace.go:172","msg":"trace[1625001800] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:991; }","duration":"128.217404ms","start":"2025-11-09T13:30:28.401922Z","end":"2025-11-09T13:30:28.530139Z","steps":["trace[1625001800] 'agreement among raft nodes before linearized reading'  (duration: 128.136663ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:30:41.627101Z","caller":"traceutil/trace.go:172","msg":"trace[1311191680] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"105.220976ms","start":"2025-11-09T13:30:41.521866Z","end":"2025-11-09T13:30:41.627087Z","steps":["trace[1311191680] 'process raft request'  (duration: 105.118212ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:30:46.844726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.887389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:30:46.844807Z","caller":"traceutil/trace.go:172","msg":"trace[2034938259] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"138.980773ms","start":"2025-11-09T13:30:46.705810Z","end":"2025-11-09T13:30:46.844791Z","steps":["trace[2034938259] 'range keys from in-memory index tree'  (duration: 138.826943ms)"],"step_count":1}
	
	
	==> gcp-auth [8500f930e3f9ec473752bbd1560e45502716064b6e945d23ffb9fb4c8afffd3a] <==
	2025/11/09 13:30:49 GCP Auth Webhook started!
	2025/11/09 13:30:54 Ready to marshal response ...
	2025/11/09 13:30:54 Ready to write response ...
	2025/11/09 13:30:54 Ready to marshal response ...
	2025/11/09 13:30:54 Ready to write response ...
	2025/11/09 13:30:54 Ready to marshal response ...
	2025/11/09 13:30:54 Ready to write response ...
	
	
	==> kernel <==
	 13:31:03 up 13 min,  0 user,  load average: 2.67, 1.11, 0.42
	Linux addons-762402 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7] <==
	I1109 13:29:35.262547       1 main.go:148] setting mtu 1500 for CNI 
	I1109 13:29:35.262685       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 13:29:35.262720       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T13:29:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 13:29:35.564460       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 13:29:35.564539       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 13:29:35.564553       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 13:29:35.564722       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 13:30:05.565021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 13:30:05.565029       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 13:30:05.565061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1109 13:30:05.565164       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1109 13:30:07.164716       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 13:30:07.164746       1 metrics.go:72] Registering metrics
	I1109 13:30:07.164795       1 controller.go:711] "Syncing nftables rules"
	I1109 13:30:15.564577       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:30:15.564626       1 main.go:301] handling current node
	I1109 13:30:25.564760       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:30:25.564803       1 main.go:301] handling current node
	I1109 13:30:35.564345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:30:35.564390       1 main.go:301] handling current node
	I1109 13:30:45.563920       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:30:45.563963       1 main.go:301] handling current node
	I1109 13:30:55.564865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:30:55.564915       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21] <==
	I1109 13:29:42.652256       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.121.13"}
	W1109 13:30:03.541916       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:30:03.547872       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:30:03.561404       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:30:03.567143       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:30:15.827065       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.121.13:443: connect: connection refused
	E1109 13:30:15.827109       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.121.13:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:15.827064       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.121.13:443: connect: connection refused
	E1109 13:30:15.827488       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.121.13:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:15.849088       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.121.13:443: connect: connection refused
	E1109 13:30:15.849130       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.121.13:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:15.849624       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.121.13:443: connect: connection refused
	E1109 13:30:15.849685       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.121.13:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:18.980264       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.68:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:18.980390       1 handler_proxy.go:99] no RequestInfo found in the context
	E1109 13:30:18.980457       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1109 13:30:18.980885       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.68:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:18.985938       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.68:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:19.007081       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.46.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.46.68:443: connect: connection refused" logger="UnhandledError"
	I1109 13:30:19.084129       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1109 13:31:01.676407       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59066: use of closed network connection
	E1109 13:31:01.811287       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59088: use of closed network connection
	
	
	==> kube-controller-manager [861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da] <==
	I1109 13:29:33.526584       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 13:29:33.526604       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 13:29:33.526680       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 13:29:33.527810       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 13:29:33.527849       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 13:29:33.530010       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:29:33.533168       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:29:33.533194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:29:33.537382       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 13:29:33.542599       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 13:29:33.546837       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 13:29:33.546900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:29:33.551119       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 13:29:33.559396       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:29:33.559407       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 13:29:33.559414       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1109 13:29:35.654664       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1109 13:30:03.536492       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1109 13:30:03.536625       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1109 13:30:03.536686       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1109 13:30:03.552937       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1109 13:30:03.556381       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 13:30:03.637224       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:30:03.657401       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:30:18.477057       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6] <==
	I1109 13:29:35.120379       1 server_linux.go:53] "Using iptables proxy"
	I1109 13:29:35.352824       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:29:35.454295       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:29:35.454330       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 13:29:35.454405       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:29:35.607435       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 13:29:35.607551       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:29:35.616363       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:29:35.625213       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:29:35.625503       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:29:35.631387       1 config.go:200] "Starting service config controller"
	I1109 13:29:35.631452       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:29:35.631477       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:29:35.631482       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:29:35.631489       1 config.go:309] "Starting node config controller"
	I1109 13:29:35.631495       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:29:35.631502       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:29:35.631496       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:29:35.631510       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:29:35.731895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:29:35.732009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:29:35.734897       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138] <==
	E1109 13:29:26.541671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:29:26.542586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:26.542740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 13:29:26.542788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:29:26.542816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:29:26.542930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 13:29:26.542933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:29:26.542987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 13:29:26.542982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:29:26.543012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:29:26.543049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:29:26.543072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 13:29:26.543078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:29:26.543154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:29:26.543255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:29:26.543341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:29:27.446276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:29:27.579577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:27.585474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:29:27.609950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:29:27.675591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 13:29:27.696659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:29:27.723487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:29:27.754544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1109 13:29:28.040830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 13:30:38 addons-762402 kubelet[1292]: I1109 13:30:38.051567    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-rrlcz" podStartSLOduration=2.157210327 podStartE2EDuration="23.051550783s" podCreationTimestamp="2025-11-09 13:30:15 +0000 UTC" firstStartedPulling="2025-11-09 13:30:16.255851668 +0000 UTC m=+47.505501353" lastFinishedPulling="2025-11-09 13:30:37.150192125 +0000 UTC m=+68.399841809" observedRunningTime="2025-11-09 13:30:38.05103888 +0000 UTC m=+69.300688573" watchObservedRunningTime="2025-11-09 13:30:38.051550783 +0000 UTC m=+69.301200477"
	Nov 09 13:30:39 addons-762402 kubelet[1292]: I1109 13:30:39.047293    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-8nlkf" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:30:39 addons-762402 kubelet[1292]: I1109 13:30:39.047382    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rrlcz" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:30:40 addons-762402 kubelet[1292]: I1109 13:30:40.018520    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-8nlkf" podStartSLOduration=3.065208463 podStartE2EDuration="25.018500947s" podCreationTimestamp="2025-11-09 13:30:15 +0000 UTC" firstStartedPulling="2025-11-09 13:30:16.262079138 +0000 UTC m=+47.511728821" lastFinishedPulling="2025-11-09 13:30:38.215371612 +0000 UTC m=+69.465021305" observedRunningTime="2025-11-09 13:30:39.056294393 +0000 UTC m=+70.305944086" watchObservedRunningTime="2025-11-09 13:30:40.018500947 +0000 UTC m=+71.268150640"
	Nov 09 13:30:40 addons-762402 kubelet[1292]: I1109 13:30:40.052532    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-8nlkf" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:30:41 addons-762402 kubelet[1292]: I1109 13:30:41.062726    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z7stg" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:30:41 addons-762402 kubelet[1292]: I1109 13:30:41.076265    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-z7stg" podStartSLOduration=2.055955268 podStartE2EDuration="26.076245098s" podCreationTimestamp="2025-11-09 13:30:15 +0000 UTC" firstStartedPulling="2025-11-09 13:30:16.283075952 +0000 UTC m=+47.532725629" lastFinishedPulling="2025-11-09 13:30:40.303365787 +0000 UTC m=+71.553015459" observedRunningTime="2025-11-09 13:30:41.074635047 +0000 UTC m=+72.324284740" watchObservedRunningTime="2025-11-09 13:30:41.076245098 +0000 UTC m=+72.325894794"
	Nov 09 13:30:42 addons-762402 kubelet[1292]: I1109 13:30:42.064043    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z7stg" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:30:45 addons-762402 kubelet[1292]: I1109 13:30:45.088880    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-6jkpc" podStartSLOduration=56.587333836 podStartE2EDuration="1m9.08886032s" podCreationTimestamp="2025-11-09 13:29:36 +0000 UTC" firstStartedPulling="2025-11-09 13:30:31.808707517 +0000 UTC m=+63.058357202" lastFinishedPulling="2025-11-09 13:30:44.310234012 +0000 UTC m=+75.559883686" observedRunningTime="2025-11-09 13:30:45.088451034 +0000 UTC m=+76.338100728" watchObservedRunningTime="2025-11-09 13:30:45.08886032 +0000 UTC m=+76.338510015"
	Nov 09 13:30:47 addons-762402 kubelet[1292]: E1109 13:30:47.649329    1292 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 09 13:30:47 addons-762402 kubelet[1292]: E1109 13:30:47.649441    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efb22ccb-8a7a-4184-a1eb-861b2390bf36-gcr-creds podName:efb22ccb-8a7a-4184-a1eb-861b2390bf36 nodeName:}" failed. No retries permitted until 2025-11-09 13:31:19.649418362 +0000 UTC m=+110.899068055 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/efb22ccb-8a7a-4184-a1eb-861b2390bf36-gcr-creds") pod "registry-creds-764b6fb674-2gshl" (UID: "efb22ccb-8a7a-4184-a1eb-861b2390bf36") : secret "registry-creds-gcr" not found
	Nov 09 13:30:48 addons-762402 kubelet[1292]: I1109 13:30:48.828824    1292 scope.go:117] "RemoveContainer" containerID="6cfec5bd1af2c9c0bfe661b8f7c02cb8bf5bac800097754616966d26189d3657"
	Nov 09 13:30:49 addons-762402 kubelet[1292]: I1109 13:30:49.877286    1292 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 09 13:30:49 addons-762402 kubelet[1292]: I1109 13:30:49.877318    1292 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 09 13:30:50 addons-762402 kubelet[1292]: I1109 13:30:50.108083    1292 scope.go:117] "RemoveContainer" containerID="6cfec5bd1af2c9c0bfe661b8f7c02cb8bf5bac800097754616966d26189d3657"
	Nov 09 13:30:50 addons-762402 kubelet[1292]: I1109 13:30:50.118941    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-6bbn8" podStartSLOduration=66.89268573 podStartE2EDuration="1m8.118924038s" podCreationTimestamp="2025-11-09 13:29:42 +0000 UTC" firstStartedPulling="2025-11-09 13:30:48.012929089 +0000 UTC m=+79.262578779" lastFinishedPulling="2025-11-09 13:30:49.239167396 +0000 UTC m=+80.488817087" observedRunningTime="2025-11-09 13:30:50.118225195 +0000 UTC m=+81.367874888" watchObservedRunningTime="2025-11-09 13:30:50.118924038 +0000 UTC m=+81.368573731"
	Nov 09 13:30:50 addons-762402 kubelet[1292]: I1109 13:30:50.119874    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-d5mhg" podStartSLOduration=68.101602981 podStartE2EDuration="1m15.119862268s" podCreationTimestamp="2025-11-09 13:29:35 +0000 UTC" firstStartedPulling="2025-11-09 13:30:40.296576255 +0000 UTC m=+71.546225942" lastFinishedPulling="2025-11-09 13:30:47.314835557 +0000 UTC m=+78.564485229" observedRunningTime="2025-11-09 13:30:48.101825368 +0000 UTC m=+79.351475060" watchObservedRunningTime="2025-11-09 13:30:50.119862268 +0000 UTC m=+81.369511961"
	Nov 09 13:30:51 addons-762402 kubelet[1292]: I1109 13:30:51.277306    1292 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cccmf\" (UniqueName: \"kubernetes.io/projected/9fdb6f91-a472-4e9e-9414-eff7db273a65-kube-api-access-cccmf\") pod \"9fdb6f91-a472-4e9e-9414-eff7db273a65\" (UID: \"9fdb6f91-a472-4e9e-9414-eff7db273a65\") "
	Nov 09 13:30:51 addons-762402 kubelet[1292]: I1109 13:30:51.279578    1292 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fdb6f91-a472-4e9e-9414-eff7db273a65-kube-api-access-cccmf" (OuterVolumeSpecName: "kube-api-access-cccmf") pod "9fdb6f91-a472-4e9e-9414-eff7db273a65" (UID: "9fdb6f91-a472-4e9e-9414-eff7db273a65"). InnerVolumeSpecName "kube-api-access-cccmf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 09 13:30:51 addons-762402 kubelet[1292]: I1109 13:30:51.378548    1292 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cccmf\" (UniqueName: \"kubernetes.io/projected/9fdb6f91-a472-4e9e-9414-eff7db273a65-kube-api-access-cccmf\") on node \"addons-762402\" DevicePath \"\""
	Nov 09 13:30:52 addons-762402 kubelet[1292]: I1109 13:30:52.125931    1292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6973d9ea9ed08ffe035a896c893eabc221b978c2f121b8cafa2d248e3c68b035"
	Nov 09 13:30:52 addons-762402 kubelet[1292]: I1109 13:30:52.138656    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-77pp6" podStartSLOduration=1.543927854 podStartE2EDuration="37.138621224s" podCreationTimestamp="2025-11-09 13:30:15 +0000 UTC" firstStartedPulling="2025-11-09 13:30:16.262218515 +0000 UTC m=+47.511868201" lastFinishedPulling="2025-11-09 13:30:51.856911887 +0000 UTC m=+83.106561571" observedRunningTime="2025-11-09 13:30:52.13828004 +0000 UTC m=+83.387929736" watchObservedRunningTime="2025-11-09 13:30:52.138621224 +0000 UTC m=+83.388270916"
	Nov 09 13:30:54 addons-762402 kubelet[1292]: I1109 13:30:54.699105    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh46w\" (UniqueName: \"kubernetes.io/projected/69f0b611-7084-4d17-814a-0ed1e841dc08-kube-api-access-kh46w\") pod \"busybox\" (UID: \"69f0b611-7084-4d17-814a-0ed1e841dc08\") " pod="default/busybox"
	Nov 09 13:30:54 addons-762402 kubelet[1292]: I1109 13:30:54.699240    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/69f0b611-7084-4d17-814a-0ed1e841dc08-gcp-creds\") pod \"busybox\" (UID: \"69f0b611-7084-4d17-814a-0ed1e841dc08\") " pod="default/busybox"
	Nov 09 13:30:56 addons-762402 kubelet[1292]: I1109 13:30:56.157393    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.477079942 podStartE2EDuration="2.157374916s" podCreationTimestamp="2025-11-09 13:30:54 +0000 UTC" firstStartedPulling="2025-11-09 13:30:54.870141632 +0000 UTC m=+86.119791308" lastFinishedPulling="2025-11-09 13:30:55.550436594 +0000 UTC m=+86.800086282" observedRunningTime="2025-11-09 13:30:56.156300444 +0000 UTC m=+87.405950139" watchObservedRunningTime="2025-11-09 13:30:56.157374916 +0000 UTC m=+87.407024609"
	
	
	==> storage-provisioner [befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb] <==
	W1109 13:30:38.573158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:40.576140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:40.581560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:42.585045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:42.588316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:44.591573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:44.596723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:46.600411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:46.632887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:48.636586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:48.640385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:50.643011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:50.646227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:52.648350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:52.651509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:54.653456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:54.656707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:56.659143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:56.663387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:58.665342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:30:58.668411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:31:00.671250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:31:00.674223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:31:02.677818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:31:02.682705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
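
Note on the storage-provisioner block above: this excerpt shows nothing but a client-go deprecation warning (warnings.go:70) repeated every couple of seconds, pointing clients from v1 Endpoints to discovery.k8s.io/v1 EndpointSlice. It is noise rather than a failure. A quick way to look at both resource types side by side (a diagnostic sketch, not part of the test harness; the context name addons-762402 is taken from this report):

    # Legacy Endpoints objects that trigger the warning when watched or updated.
    kubectl --context addons-762402 get endpoints -A
    # The EndpointSlice objects the warning recommends using instead.
    kubectl --context addons-762402 get endpointslices -A
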
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-762402 -n addons-762402
helpers_test.go:269: (dbg) Run:  kubectl --context addons-762402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-create-fswsp gcp-auth-certs-patch-ncnbg ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd registry-creds-764b6fb674-2gshl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-762402 describe pod gcp-auth-certs-create-fswsp gcp-auth-certs-patch-ncnbg ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd registry-creds-764b6fb674-2gshl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-762402 describe pod gcp-auth-certs-create-fswsp gcp-auth-certs-patch-ncnbg ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd registry-creds-764b6fb674-2gshl: exit status 1 (73.252464ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-fswsp" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-ncnbg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-l24wm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f6fqd" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-2gshl" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-762402 describe pod gcp-auth-certs-create-fswsp gcp-auth-certs-patch-ncnbg ingress-nginx-admission-create-l24wm ingress-nginx-admission-patch-f6fqd registry-creds-764b6fb674-2gshl: exit status 1
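
The post-mortem above is informational rather than a failure in itself: helpers_test.go first lists every pod whose status.phase is not Running, then tries to describe each one, but by the time describe runs those pods have already been removed, hence the NotFound errors and the exit status 1. Judging by their names, the gcp-auth-certs-* and ingress-nginx-admission-* entries are one-shot certificate/webhook Job pods that simply completed (an inference from naming, not something the report states). A sketch for reproducing the same query by hand with the phase made visible (context name taken from this report):

    # Same field selector the harness uses, with namespace, name and phase per pod.
    kubectl --context addons-762402 get pods -A \
      --field-selector=status.phase!=Running \
      -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase
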
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable headlamp --alsologtostderr -v=1: exit status 11 (227.566425ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1109 13:31:04.208348   19645 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:04.208477   19645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:04.208485   19645 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:04.208489   19645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:04.208658   19645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:04.208888   19645 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:04.209196   19645 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:04.209209   19645 addons.go:607] checking whether the cluster is paused
	I1109 13:31:04.209288   19645 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:04.209299   19645 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:04.209651   19645 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:04.227156   19645 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:04.227195   19645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:04.243854   19645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:04.334421   19645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:04.334514   19645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:04.361557   19645 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:04.361590   19645 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:04.361594   19645 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:04.361597   19645 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:04.361599   19645 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:04.361603   19645 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:04.361605   19645 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:04.361607   19645 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:04.361610   19645 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:04.361622   19645 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:04.361626   19645 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:04.361629   19645 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:04.361649   19645 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:04.361654   19645 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:04.361660   19645 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:04.361680   19645 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:04.361688   19645 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:04.361692   19645 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:04.361695   19645 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:04.361697   19645 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:04.361702   19645 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:04.361705   19645 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:04.361707   19645 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:04.361709   19645 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:04.361711   19645 cri.go:89] found id: ""
	I1109 13:31:04.361758   19645 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:04.374261   19645 out.go:203] 
	W1109 13:31:04.375419   19645 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:04.375437   19645 out.go:285] * 
	* 
	W1109 13:31:04.378332   19645 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:04.379444   19645 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.34s)
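Note on this and the following addon failures: every `addons disable` call in this run (Headlamp, CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) exits 11 with MK_ADDON_DISABLE_PAUSED. The logs show the same sequence each time: minikube lists kube-system containers with crictl, then probes the paused state with `sudo runc list -f json`, and that probe fails with "open /run/runc: no such file or directory". A plausible reading (an inference, not stated in the logs) is that this CRI-O node keeps no runtime state under /run/runc, for example because a different OCI runtime or state root is in use. A minimal sketch of reproducing the probe by hand, assuming shell access to the node via `minikube ssh` and the profile name taken from the log:

    # list kube-system containers the same way the log shows minikube doing it
    minikube -p addons-762402 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # the probe that fails in this run
    minikube -p addons-762402 ssh -- sudo runc list -f json
    # check which runtime state directories actually exist (/run/crun is an assumption about the alternative)
    minikube -p addons-762402 ssh -- ls -ld /run/runc /run/crun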

                                                
                                    
TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-bs44j" [f924b48d-a6d0-4b39-bee7-9f8533ad63c0] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002672797s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (232.499712ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:31:13.337990   21189 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:13.338152   21189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:13.338161   21189 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:13.338166   21189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:13.338374   21189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:13.338754   21189 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:13.339244   21189 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:13.339266   21189 addons.go:607] checking whether the cluster is paused
	I1109 13:31:13.339396   21189 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:13.339411   21189 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:13.339957   21189 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:13.357027   21189 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:13.357072   21189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:13.373027   21189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:13.462838   21189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:13.462918   21189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:13.495069   21189 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:13.495115   21189 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:13.495123   21189 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:13.495128   21189 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:13.495132   21189 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:13.495138   21189 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:13.495142   21189 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:13.495147   21189 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:13.495151   21189 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:13.495163   21189 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:13.495171   21189 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:13.495175   21189 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:13.495178   21189 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:13.495182   21189 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:13.495186   21189 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:13.495201   21189 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:13.495213   21189 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:13.495218   21189 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:13.495222   21189 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:13.495225   21189 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:13.495232   21189 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:13.495236   21189 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:13.495240   21189 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:13.495244   21189 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:13.495247   21189 cri.go:89] found id: ""
	I1109 13:31:13.495300   21189 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:13.510095   21189 out.go:203] 
	W1109 13:31:13.511136   21189 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:13.511160   21189 out.go:285] * 
	* 
	W1109 13:31:13.514085   21189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:13.515188   21189 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                    
TestAddons/parallel/LocalPath (8.07s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-762402 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-762402 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-762402 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4ce84896-3451-43b2-93c8-63d71ed6d53b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4ce84896-3451-43b2-93c8-63d71ed6d53b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4ce84896-3451-43b2-93c8-63d71ed6d53b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003342127s
addons_test.go:967: (dbg) Run:  kubectl --context addons-762402 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 ssh "cat /opt/local-path-provisioner/pvc-762784ac-7e30-4ec8-bec8-a2511c62cb32_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-762402 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-762402 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (234.360715ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:31:12.278174   21071 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:12.278609   21071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:12.278624   21071 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:12.278632   21071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:12.279055   21071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:12.279588   21071 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:12.279962   21071 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:12.279981   21071 addons.go:607] checking whether the cluster is paused
	I1109 13:31:12.280075   21071 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:12.280090   21071 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:12.280444   21071 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:12.298115   21071 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:12.298162   21071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:12.315084   21071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:12.405735   21071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:12.405823   21071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:12.432600   21071 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:12.432619   21071 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:12.432623   21071 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:12.432626   21071 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:12.432629   21071 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:12.432632   21071 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:12.432634   21071 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:12.432637   21071 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:12.432654   21071 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:12.432664   21071 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:12.432668   21071 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:12.432673   21071 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:12.432678   21071 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:12.432683   21071 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:12.432687   21071 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:12.432695   21071 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:12.432700   21071 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:12.432706   21071 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:12.432710   21071 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:12.432714   21071 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:12.432718   21071 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:12.432722   21071 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:12.432727   21071 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:12.432731   21071 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:12.432734   21071 cri.go:89] found id: ""
	I1109 13:31:12.432774   21071 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:12.445635   21071 out.go:203] 
	W1109 13:31:12.446632   21071 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:12.446670   21071 out.go:285] * 
	* 
	W1109 13:31:12.450320   21071 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:12.451531   21071 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.07s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rrlcz" [10946875-574c-4b8c-acb0-212811b4316d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003250095s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (232.979278ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:31:08.100075   19885 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:08.100392   19885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:08.100403   19885 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:08.100410   19885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:08.100622   19885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:08.100900   19885 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:08.101216   19885 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:08.101233   19885 addons.go:607] checking whether the cluster is paused
	I1109 13:31:08.101328   19885 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:08.101344   19885 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:08.101826   19885 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:08.121145   19885 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:08.121188   19885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:08.139415   19885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:08.231509   19885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:08.231591   19885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:08.258237   19885 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:08.258256   19885 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:08.258261   19885 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:08.258266   19885 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:08.258270   19885 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:08.258275   19885 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:08.258278   19885 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:08.258283   19885 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:08.258288   19885 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:08.258294   19885 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:08.258299   19885 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:08.258304   19885 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:08.258308   19885 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:08.258313   19885 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:08.258322   19885 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:08.258334   19885 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:08.258341   19885 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:08.258346   19885 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:08.258349   19885 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:08.258351   19885 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:08.258353   19885 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:08.258356   19885 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:08.258358   19885 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:08.258360   19885 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:08.258362   19885 cri.go:89] found id: ""
	I1109 13:31:08.258393   19885 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:08.270997   19885 out.go:203] 
	W1109 13:31:08.272091   19885 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:08.272109   19885 out.go:285] * 
	* 
	W1109 13:31:08.275003   19885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:08.275976   19885 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.24s)

                                                
                                    
TestAddons/parallel/Yakd (5.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6fdjm" [9b0294c7-9ea5-475d-90f8-d539dec5b215] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003081572s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable yakd --alsologtostderr -v=1: exit status 11 (227.809322ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:31:22.746827   22092 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:22.747108   22092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:22.747117   22092 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:22.747121   22092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:22.747404   22092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:22.747735   22092 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:22.748058   22092 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:22.748074   22092 addons.go:607] checking whether the cluster is paused
	I1109 13:31:22.748169   22092 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:22.748183   22092 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:22.748545   22092 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:22.767122   22092 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:22.767168   22092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:22.784774   22092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:22.875917   22092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:22.875986   22092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:22.902564   22092 cri.go:89] found id: "d7738655acb84c4efbc4f35b8b5c93ff7d6577537b16dfabb7e9f5b6db09ef0d"
	I1109 13:31:22.902590   22092 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:22.902595   22092 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:22.902598   22092 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:22.902603   22092 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:22.902607   22092 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:22.902610   22092 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:22.902612   22092 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:22.902614   22092 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:22.902619   22092 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:22.902622   22092 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:22.902624   22092 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:22.902627   22092 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:22.902630   22092 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:22.902632   22092 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:22.902637   22092 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:22.902665   22092 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:22.902671   22092 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:22.902675   22092 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:22.902679   22092 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:22.902684   22092 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:22.902688   22092 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:22.902692   22092 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:22.902695   22092 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:22.902698   22092 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:22.902701   22092 cri.go:89] found id: ""
	I1109 13:31:22.902744   22092 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:22.915795   22092 out.go:203] 
	W1109 13:31:22.916821   22092 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:22.916838   22092 out.go:285] * 
	* 
	W1109 13:31:22.919778   22092 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:22.920763   22092 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.23s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-8nlkf" [c0515672-7870-4ffa-ade6-67e738f1ba34] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.002965296s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-762402 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-762402 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (231.857797ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:31:21.089065   21995 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:31:21.089326   21995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:21.089337   21995 out.go:374] Setting ErrFile to fd 2...
	I1109 13:31:21.089341   21995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:31:21.089510   21995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:31:21.089768   21995 mustload.go:66] Loading cluster: addons-762402
	I1109 13:31:21.090088   21995 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:21.090102   21995 addons.go:607] checking whether the cluster is paused
	I1109 13:31:21.090180   21995 config.go:182] Loaded profile config "addons-762402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:31:21.090190   21995 host.go:66] Checking if "addons-762402" exists ...
	I1109 13:31:21.090516   21995 cli_runner.go:164] Run: docker container inspect addons-762402 --format={{.State.Status}}
	I1109 13:31:21.109003   21995 ssh_runner.go:195] Run: systemctl --version
	I1109 13:31:21.109051   21995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-762402
	I1109 13:31:21.126806   21995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/addons-762402/id_rsa Username:docker}
	I1109 13:31:21.219581   21995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:31:21.219692   21995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:31:21.247237   21995 cri.go:89] found id: "d7738655acb84c4efbc4f35b8b5c93ff7d6577537b16dfabb7e9f5b6db09ef0d"
	I1109 13:31:21.247264   21995 cri.go:89] found id: "af27443b9f8960f215197598df9b91ffdc8840cc6699e13907ef2947a7f00d7f"
	I1109 13:31:21.247268   21995 cri.go:89] found id: "faa104590cba322dc0a5b9cf53d37627283cf55d3c3e01f2df4160549024ba44"
	I1109 13:31:21.247273   21995 cri.go:89] found id: "4e995ec9dee843135f1c0d0f6013b3b611a0904bb3f4e1d9e3a32292ddec2484"
	I1109 13:31:21.247275   21995 cri.go:89] found id: "b930aa6a1203014deddb934b10a09a6c0fe15230dc625851f1f8af6995f0c88a"
	I1109 13:31:21.247279   21995 cri.go:89] found id: "d67da63b5ee916610fc4244764266a69e975404d12653c2d5c53579883b8f1c1"
	I1109 13:31:21.247281   21995 cri.go:89] found id: "0aaed23bb5d29a8452cd11f247803cb291c459fb6a6e4e209ef99d928f631144"
	I1109 13:31:21.247284   21995 cri.go:89] found id: "08f89d0732d380e6bf1e4981ab1ec47b039f172745c2906fcf611652aaf44a3c"
	I1109 13:31:21.247286   21995 cri.go:89] found id: "2afddde1486e2dfe821e2e23866ced6e5b833b9bc83463adad53a55d1941cd8e"
	I1109 13:31:21.247301   21995 cri.go:89] found id: "c2e8f6876246e31a51938f36acf9e6262ba61dc5526f4f4a5e8789cd851d46cf"
	I1109 13:31:21.247307   21995 cri.go:89] found id: "c947eeaaf49bfe6ebb73d36bcfba2b027085814dd9abc647f0e0692d4b65939e"
	I1109 13:31:21.247309   21995 cri.go:89] found id: "3c9fabcff63aa51893e804edf796bca0360c29c53e840636c0db34b4037bde26"
	I1109 13:31:21.247312   21995 cri.go:89] found id: "057fd0d666013bd4f0e02f4749164afa0a5d0906ec2754e1278f051e29dbe2aa"
	I1109 13:31:21.247314   21995 cri.go:89] found id: "5e49dd8732922347f346f66fdc4fee803771be2e5ef0d4d345d966b70ab95d61"
	I1109 13:31:21.247317   21995 cri.go:89] found id: "403c426d7512024ca15621249cc490aed2d29fd9c26fd69e34ee8a4865f6ae84"
	I1109 13:31:21.247326   21995 cri.go:89] found id: "8c332e600a86f5db4cbaefa708b6b131d75d12b1625fc751633649df36046ad6"
	I1109 13:31:21.247332   21995 cri.go:89] found id: "e93137eb9f50682474df733b717841dc665886c5322e88ebbcc5f7daeff9c7f6"
	I1109 13:31:21.247336   21995 cri.go:89] found id: "befdac5dae601c24846eafd867c1ffdaf970739dfdb6544290f7d401fb479adb"
	I1109 13:31:21.247339   21995 cri.go:89] found id: "62effce4d440574a561ee8bb4713a143cff8c4dd5102c8837cbcb323a55c6f8e"
	I1109 13:31:21.247341   21995 cri.go:89] found id: "79692b5ce1377fdc3438f677558138b88bf7d705bda4ddca28079953b176faf6"
	I1109 13:31:21.247343   21995 cri.go:89] found id: "5af868c65929f995aa32fd9d978e8b01f76671c795d79d2fdd5e8b5636497bb7"
	I1109 13:31:21.247346   21995 cri.go:89] found id: "5bb7efe058cec23686f0732d66f5c759521ce0362ae0dac95c722fbe50e4ea21"
	I1109 13:31:21.247348   21995 cri.go:89] found id: "861090a7ec88133657d8ba74914277d902ce94237d6ff1f5d419ebee604b79da"
	I1109 13:31:21.247350   21995 cri.go:89] found id: "790087032ffe1bf767edeef5915a65edb0e4e1d93fd292a3746fd7e1aeb43138"
	I1109 13:31:21.247353   21995 cri.go:89] found id: "09ed3ea08406406e4a9fa9f5b1bf6bfff6a0edd9133d05145b78a59503d5f47b"
	I1109 13:31:21.247355   21995 cri.go:89] found id: ""
	I1109 13:31:21.247397   21995 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:31:21.260899   21995 out.go:203] 
	W1109 13:31:21.261940   21995 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:31:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:31:21.261961   21995 out.go:285] * 
	* 
	W1109 13:31:21.265261   21995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:31:21.266620   21995 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-762402 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-630518 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-630518 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hh4m6" [08ba367e-f149-49b8-beb8-78d6727fc499] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-630518 -n functional-630518
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-09 13:46:34.289402608 +0000 UTC m=+1073.025889651
functional_test.go:1645: (dbg) Run:  kubectl --context functional-630518 describe po hello-node-connect-7d85dfc575-hh4m6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-630518 describe po hello-node-connect-7d85dfc575-hh4m6 -n default:
Name:             hello-node-connect-7d85dfc575-hh4m6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-630518/192.168.49.2
Start Time:       Sun, 09 Nov 2025 13:36:33 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4xrkz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-4xrkz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hh4m6 to functional-630518
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 9m58s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-630518 logs hello-node-connect-7d85dfc575-hh4m6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-630518 logs hello-node-connect-7d85dfc575-hh4m6 -n default: exit status 1 (63.75073ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hh4m6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-630518 logs hello-node-connect-7d85dfc575-hh4m6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-630518 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-hh4m6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-630518/192.168.49.2
Start Time:       Sun, 09 Nov 2025 13:36:33 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4xrkz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-4xrkz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hh4m6 to functional-630518
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 9m58s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
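The Failed events pin down the root cause: the Deployment references the unqualified image name "kicbase/echo-server", and the node's CRI-O resolves short names in enforcing mode, so a name that could match more than one search registry is rejected as ambiguous instead of being silently completed. A fully qualified reference sidesteps the policy. The commands below are an illustrative sketch only, not part of the recorded run; they assume the policy lives in /etc/containers/registries.conf (its usual location for CRI-O) and that Docker Hub is the intended source of the image:

	out/minikube-linux-amd64 -p functional-630518 ssh -- sudo grep -E 'short-name-mode|unqualified-search-registries' /etc/containers/registries.conf
	kubectl --context functional-630518 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest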

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-630518 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-630518 logs -l app=hello-node-connect: exit status 1 (55.311496ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hh4m6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-630518 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-630518 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.208.61
IPs:                      10.108.208.61
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30449/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
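The blank Endpoints line is the service-level symptom of the same failure: the Service selects app=hello-node-connect, but its only matching pod never became Ready, so no endpoint is published and NodePort 30449 has nothing to forward to. Two quick checks against the same context (illustrative, not part of the recorded run):

	kubectl --context functional-630518 get endpoints hello-node-connect
	kubectl --context functional-630518 get pods -l app=hello-node-connect -o wide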
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-630518
helpers_test.go:243: (dbg) docker inspect functional-630518:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cae1c2d559159ca7305058dc359e3b1bd55e8f76710e55ddecbf023062092b03",
	        "Created": "2025-11-09T13:34:46.326918789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33064,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:34:46.357572576Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/cae1c2d559159ca7305058dc359e3b1bd55e8f76710e55ddecbf023062092b03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cae1c2d559159ca7305058dc359e3b1bd55e8f76710e55ddecbf023062092b03/hostname",
	        "HostsPath": "/var/lib/docker/containers/cae1c2d559159ca7305058dc359e3b1bd55e8f76710e55ddecbf023062092b03/hosts",
	        "LogPath": "/var/lib/docker/containers/cae1c2d559159ca7305058dc359e3b1bd55e8f76710e55ddecbf023062092b03/cae1c2d559159ca7305058dc359e3b1bd55e8f76710e55ddecbf023062092b03-json.log",
	        "Name": "/functional-630518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-630518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-630518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cae1c2d559159ca7305058dc359e3b1bd55e8f76710e55ddecbf023062092b03",
	                "LowerDir": "/var/lib/docker/overlay2/aaec34e620b2fc73bd8f86db3f7ab5cab6f6b77f28a96c46c3ead1fe0c0d3494-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aaec34e620b2fc73bd8f86db3f7ab5cab6f6b77f28a96c46c3ead1fe0c0d3494/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aaec34e620b2fc73bd8f86db3f7ab5cab6f6b77f28a96c46c3ead1fe0c0d3494/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aaec34e620b2fc73bd8f86db3f7ab5cab6f6b77f28a96c46c3ead1fe0c0d3494/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-630518",
	                "Source": "/var/lib/docker/volumes/functional-630518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-630518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-630518",
	                "name.minikube.sigs.k8s.io": "functional-630518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b050d14e934441f9486f7aca63ae33d3638899562a3f90c80c8aa8aabd141b18",
	            "SandboxKey": "/var/run/docker/netns/b050d14e9344",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-630518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:47:07:9f:d0:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04c6eb5f3b199a0ca1064749daf76e87ba4d619c818b984ab06ef748d31f82b9",
	                    "EndpointID": "b69ed8420d15062ebd152a1281d2a0775627387d9b1c902890eec57a0a9866af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-630518",
	                        "cae1c2d55915"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
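For reading the inspect output above: NetworkSettings.Ports lists the kicbase container's published ports, e.g. the API server port 8441/tcp is bound to 127.0.0.1:32781 and SSH (22/tcp) to 127.0.0.1:32778. The same binding can be recovered without parsing the JSON (illustrative, assuming the container is still running):

	docker port functional-630518 8441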
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-630518 -n functional-630518
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 logs -n 25: (1.147289628s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-630518 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ ssh            │ functional-630518 ssh -- ls -la /mount-9p                                                                          │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ ssh            │ functional-630518 ssh sudo umount -f /mount-9p                                                                     │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ mount          │ -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount1 --alsologtostderr -v=1 │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ mount          │ -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount2 --alsologtostderr -v=1 │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ ssh            │ functional-630518 ssh findmnt -T /mount1                                                                           │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ mount          │ -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount3 --alsologtostderr -v=1 │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ start          │ -p functional-630518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ start          │ -p functional-630518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ start          │ -p functional-630518 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ ssh            │ functional-630518 ssh findmnt -T /mount1                                                                           │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ dashboard      │ --url --port 36195 -p functional-630518 --alsologtostderr -v=1                                                     │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ ssh            │ functional-630518 ssh findmnt -T /mount2                                                                           │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ ssh            │ functional-630518 ssh findmnt -T /mount3                                                                           │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ mount          │ -p functional-630518 --kill=true                                                                                   │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ update-context │ functional-630518 update-context --alsologtostderr -v=2                                                            │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ update-context │ functional-630518 update-context --alsologtostderr -v=2                                                            │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ update-context │ functional-630518 update-context --alsologtostderr -v=2                                                            │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ image          │ functional-630518 image ls --format short --alsologtostderr                                                        │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ ssh            │ functional-630518 ssh pgrep buildkitd                                                                              │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │                     │
	│ image          │ functional-630518 image build -t localhost/my-image:functional-630518 testdata/build --alsologtostderr             │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ image          │ functional-630518 image ls                                                                                         │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ image          │ functional-630518 image ls --format yaml --alsologtostderr                                                         │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ image          │ functional-630518 image ls --format json --alsologtostderr                                                         │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	│ image          │ functional-630518 image ls --format table --alsologtostderr                                                        │ functional-630518 │ jenkins │ v1.37.0 │ 09 Nov 25 13:36 UTC │ 09 Nov 25 13:36 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:36:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:36:50.144855   47994 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:36:50.144981   47994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:36:50.144993   47994 out.go:374] Setting ErrFile to fd 2...
	I1109 13:36:50.144999   47994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:36:50.145319   47994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:36:50.145898   47994 out.go:368] Setting JSON to false
	I1109 13:36:50.147321   47994 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1160,"bootTime":1762694250,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:36:50.147441   47994 start.go:143] virtualization: kvm guest
	I1109 13:36:50.149278   47994 out.go:179] * [functional-630518] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:36:50.150655   47994 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:36:50.150648   47994 notify.go:221] Checking for updates...
	I1109 13:36:50.151864   47994 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:36:50.153032   47994 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:36:50.154197   47994 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 13:36:50.155242   47994 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:36:50.156156   47994 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:36:50.157403   47994 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:36:50.157907   47994 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:36:50.183771   47994 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 13:36:50.183917   47994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:36:50.247881   47994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-09 13:36:50.237035979 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:36:50.248020   47994 docker.go:319] overlay module found
	I1109 13:36:50.249514   47994 out.go:179] * Using the docker driver based on existing profile
	I1109 13:36:50.250619   47994 start.go:309] selected driver: docker
	I1109 13:36:50.250658   47994 start.go:930] validating driver "docker" against &{Name:functional-630518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630518 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:36:50.250752   47994 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:36:50.250842   47994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:36:50.307021   47994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-09 13:36:50.297377645 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:36:50.307788   47994 cni.go:84] Creating CNI manager for ""
	I1109 13:36:50.307843   47994 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:36:50.307883   47994 start.go:353] cluster config:
	{Name:functional-630518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:36:50.309357   47994 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 09 13:36:55 functional-630518 crio[3580]: time="2025-11-09T13:36:55.789559362Z" level=info msg="Starting container: 71621c642b26e839af619c182461374152224c50baecf21231c21381fb6fa85a" id=c9b4637d-67f7-4573-9793-0d84bac02c39 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 13:36:55 functional-630518 crio[3580]: time="2025-11-09T13:36:55.791511086Z" level=info msg="Started container" PID=8002 containerID=71621c642b26e839af619c182461374152224c50baecf21231c21381fb6fa85a description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-6ztkx/dashboard-metrics-scraper id=c9b4637d-67f7-4573-9793-0d84bac02c39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=829f25c7a6c404a0c07600f9039948ce5a49c244d61376593cb5232057bdd9b2
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.392473955Z" level=info msg="Stopping pod sandbox: 1d96443ab7b5b40a2ae2532b719d66ee33d1a6f94bb001fdc83e034398bff75b" id=9d53350d-ecd5-4e98-9489-a7b1a039b572 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.392527538Z" level=info msg="Stopped pod sandbox (already stopped): 1d96443ab7b5b40a2ae2532b719d66ee33d1a6f94bb001fdc83e034398bff75b" id=9d53350d-ecd5-4e98-9489-a7b1a039b572 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.392944022Z" level=info msg="Removing pod sandbox: 1d96443ab7b5b40a2ae2532b719d66ee33d1a6f94bb001fdc83e034398bff75b" id=b488142f-fe04-4f25-8ef0-00235f1c318c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.395278687Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.395341333Z" level=info msg="Removed pod sandbox: 1d96443ab7b5b40a2ae2532b719d66ee33d1a6f94bb001fdc83e034398bff75b" id=b488142f-fe04-4f25-8ef0-00235f1c318c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.395798178Z" level=info msg="Stopping pod sandbox: 69814c2e0ef439c0c44fb8cbd3bd7979e0e23f1dc79b90cbbb9792a7564da58e" id=ffc0197a-a9bf-4b5e-838e-4a2597fb557c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.39584579Z" level=info msg="Stopped pod sandbox (already stopped): 69814c2e0ef439c0c44fb8cbd3bd7979e0e23f1dc79b90cbbb9792a7564da58e" id=ffc0197a-a9bf-4b5e-838e-4a2597fb557c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.396156626Z" level=info msg="Removing pod sandbox: 69814c2e0ef439c0c44fb8cbd3bd7979e0e23f1dc79b90cbbb9792a7564da58e" id=54c634e1-24fe-47c1-9131-dc803d121e5c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.398487324Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.398541907Z" level=info msg="Removed pod sandbox: 69814c2e0ef439c0c44fb8cbd3bd7979e0e23f1dc79b90cbbb9792a7564da58e" id=54c634e1-24fe-47c1-9131-dc803d121e5c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.398830976Z" level=info msg="Stopping pod sandbox: 36f5bfa96ea0c9dcaa76595c60cb3d61a93661762d44e50a6feaf2e83e44055a" id=b07e43e3-953c-48c8-befd-bd73712a598d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.398877916Z" level=info msg="Stopped pod sandbox (already stopped): 36f5bfa96ea0c9dcaa76595c60cb3d61a93661762d44e50a6feaf2e83e44055a" id=b07e43e3-953c-48c8-befd-bd73712a598d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.399142167Z" level=info msg="Removing pod sandbox: 36f5bfa96ea0c9dcaa76595c60cb3d61a93661762d44e50a6feaf2e83e44055a" id=1836faa1-3c92-4969-81e0-27020b8eb3f6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.401496637Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 13:36:59 functional-630518 crio[3580]: time="2025-11-09T13:36:59.401545107Z" level=info msg="Removed pod sandbox: 36f5bfa96ea0c9dcaa76595c60cb3d61a93661762d44e50a6feaf2e83e44055a" id=1836faa1-3c92-4969-81e0-27020b8eb3f6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:37:17 functional-630518 crio[3580]: time="2025-11-09T13:37:17.405299717Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5d7d7a3a-9e45-4b9a-9e7a-c37ac89e6a49 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:37:23 functional-630518 crio[3580]: time="2025-11-09T13:37:23.405082686Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ddd8956e-864a-4760-8362-dc353e517dfd name=/runtime.v1.ImageService/PullImage
	Nov 09 13:38:00 functional-630518 crio[3580]: time="2025-11-09T13:38:00.405338705Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7dd5aa66-f126-486c-92bc-b411b7381e93 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:38:13 functional-630518 crio[3580]: time="2025-11-09T13:38:13.405667642Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4eeb9a4d-7875-4b60-b042-8bc173827f40 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:39:32 functional-630518 crio[3580]: time="2025-11-09T13:39:32.405357844Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=92c44679-3858-4578-ac16-cbc0dfd9bf70 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:39:47 functional-630518 crio[3580]: time="2025-11-09T13:39:47.405142896Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3f2f5eff-c86f-4cbf-a505-22063605effa name=/runtime.v1.ImageService/PullImage
	Nov 09 13:42:18 functional-630518 crio[3580]: time="2025-11-09T13:42:18.405043112Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=92c6233b-2c44-48e1-9fe8-a51bfdb24bee name=/runtime.v1.ImageService/PullImage
	Nov 09 13:42:28 functional-630518 crio[3580]: time="2025-11-09T13:42:28.405275223Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f701faea-821f-4d5e-88ba-494c335e1a60 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	71621c642b26e       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   829f25c7a6c40       dashboard-metrics-scraper-77bf4d6c4c-6ztkx   kubernetes-dashboard
	66abf8b5119c3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   7b2fbf9922636       kubernetes-dashboard-855c9754f9-ckrrw        kubernetes-dashboard
	7f51b115944ac       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   8ceb0d60f75f8       busybox-mount                                default
	ee82ae645759f       docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b                  9 minutes ago       Running             myfrontend                  0                   e8688efdd676c       sp-pod                                       default
	0ae5f2c837614       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   312a1f6d4b5ce       nginx-svc                                    default
	109b8d7df296e       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   f89a08dbc2ce3       mysql-5bb876957f-pk462                       default
	52b9ffb60368f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   0b60a48bb9e29       kube-apiserver-functional-630518             kube-system
	fddd7e567f42b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   b4191050e2dd2       kube-controller-manager-functional-630518    kube-system
	35a8739f18d16       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   cdeba3440251e       etcd-functional-630518                       kube-system
	41457b9c18f84       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   70e599edcc774       kube-scheduler-functional-630518             kube-system
	c67dcb7593b5a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   232cbb87eaf48       kube-proxy-rjx8z                             kube-system
	157cb16bc4373       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   b4191050e2dd2       kube-controller-manager-functional-630518    kube-system
	8281989e48976       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   ca9c696da496d       coredns-66bc5c9577-f8tpd                     kube-system
	a3ee766a63265       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   6f5e8e5fe4600       storage-provisioner                          kube-system
	92572272e03a8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   0c207613419ff       kindnet-49hwk                                kube-system
	e562f66c1e024       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   ca9c696da496d       coredns-66bc5c9577-f8tpd                     kube-system
	4a7f4cbf22506       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   6f5e8e5fe4600       storage-provisioner                          kube-system
	17afc102318c4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   232cbb87eaf48       kube-proxy-rjx8z                             kube-system
	c9c6886ca29ae       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   0c207613419ff       kindnet-49hwk                                kube-system
	41603547c5069       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   70e599edcc774       kube-scheduler-functional-630518             kube-system
	55379b52f1f87       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   cdeba3440251e       etcd-functional-630518                       kube-system
	
	
	==> coredns [8281989e48976e083937eab9b99160d3fa6a5fd15307f20127c398ede471190b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40006 - 2907 "HINFO IN 2423606012021346318.8513760385399132830. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104786317s
	
	
	==> coredns [e562f66c1e02489d5c66f8c5c727e8963f7156c980df7eeec487856f4c570177] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60094 - 34893 "HINFO IN 2354604105806563751.2936250522923900933. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.097816821s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-630518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-630518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=functional-630518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_34_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:34:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-630518
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:46:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:44:42 +0000   Sun, 09 Nov 2025 13:34:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:44:42 +0000   Sun, 09 Nov 2025 13:34:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:44:42 +0000   Sun, 09 Nov 2025 13:34:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:44:42 +0000   Sun, 09 Nov 2025 13:35:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-630518
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                1f0c60fd-c2ee-44ab-9725-3a68eef0491c
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-f6sxw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  default                     hello-node-connect-7d85dfc575-hh4m6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-pk462                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-f8tpd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-630518                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-49hwk                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-630518              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-630518     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-rjx8z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-630518              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-6ztkx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ckrrw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-630518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-630518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-630518 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-630518 event: Registered Node functional-630518 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-630518 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-630518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-630518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-630518 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-630518 event: Registered Node functional-630518 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
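	The "martian source 10.244.0.20 from 127.0.0.1" bursts line up with kube-proxy setting route_localnet=1 (visible in the kube-proxy logs further down): with that sysctl on, localhost-sourced NodePort traffic can legitimately be forwarded toward the pod IP 10.244.0.20, and the kernel flags packets carrying a loopback source on eth0 as martian. These entries are noise from that traffic rather than a failure by themselves. The relevant sysctls can be checked on the node (sketch):
	
	  minikube -p functional-630518 ssh -- sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians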
	
	==> etcd [35a8739f18d16be493de48dfecd1df08bddfa419d301a2acd86d5366c5aed6ff] <==
	{"level":"warn","ts":"2025-11-09T13:36:00.550064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.555771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.562060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.574984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.581794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.587701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.594494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.601880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.608779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.614556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.620472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.627471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.633272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.640516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.646350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.652695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.658493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.673377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.679232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:36:00.685811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47192","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:36:32.537089Z","caller":"traceutil/trace.go:172","msg":"trace[781259911] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"121.879352ms","start":"2025-11-09T13:36:32.415192Z","end":"2025-11-09T13:36:32.537071Z","steps":["trace[781259911] 'process raft request'  (duration: 72.189752ms)","trace[781259911] 'compare'  (duration: 49.583264ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T13:36:41.722813Z","caller":"traceutil/trace.go:172","msg":"trace[1066670924] transaction","detail":"{read_only:false; response_revision:691; number_of_response:1; }","duration":"134.869005ms","start":"2025-11-09T13:36:41.587924Z","end":"2025-11-09T13:36:41.722793Z","steps":["trace[1066670924] 'process raft request'  (duration: 72.96107ms)","trace[1066670924] 'compare'  (duration: 61.797092ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T13:46:00.276707Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1110}
	{"level":"info","ts":"2025-11-09T13:46:00.295128Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1110,"took":"18.082082ms","hash":2389616489,"current-db-size-bytes":3403776,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-09T13:46:00.295161Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2389616489,"revision":1110,"compact-revision":-1}
	
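	The run of "rejected connection on client endpoint ... EOF" warnings at 13:36:00 is the pattern etcd emits when something opens a plain TCP connection to the TLS client port and closes it without a handshake, e.g. a port-open probe while the apiserver is coming back up; it is noisy but not an error by itself, and the compaction entries that follow are routine housekeeping. A single such warning can be provoked on the node like this (sketch; assumes nc is present in the node image):
	
	  minikube -p functional-630518 ssh -- nc -z 127.0.0.1 2379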
	
	==> etcd [55379b52f1f8768cf21632cfeb2b73175238ce5801abd01fb71d1210652b47e7] <==
	{"level":"warn","ts":"2025-11-09T13:34:56.017164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:34:56.027253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:34:56.033692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:34:56.040353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:34:56.057417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:34:56.069757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:34:56.110564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35206","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:35:39.950940Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-09T13:35:39.951012Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-630518","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-09T13:35:39.951102Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:35:46.953519Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:35:46.953609Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:35:46.953650Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-09T13:35:46.953779Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-09T13:35:46.953804Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-09T13:35:46.954135Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:35:46.954174Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:35:46.954200Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:35:46.954204Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:35:46.954215Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-09T13:35:46.954211Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:35:46.955750Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-09T13:35:46.955799Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:35:46.955821Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-09T13:35:46.955827Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-630518","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 13:46:35 up 29 min,  0 user,  load average: 0.00, 0.13, 0.28
	Linux functional-630518 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [92572272e03a8ad99ff3345438856fd84d2c36b46d073063cd56c9eb9379ff28] <==
	I1109 13:44:30.372735       1 main.go:301] handling current node
	I1109 13:44:40.366841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:44:40.366889       1 main.go:301] handling current node
	I1109 13:44:50.367145       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:44:50.367190       1 main.go:301] handling current node
	I1109 13:45:00.375211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:45:00.375248       1 main.go:301] handling current node
	I1109 13:45:10.374322       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:45:10.374351       1 main.go:301] handling current node
	I1109 13:45:20.366723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:45:20.366760       1 main.go:301] handling current node
	I1109 13:45:30.372372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:45:30.372415       1 main.go:301] handling current node
	I1109 13:45:40.374134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:45:40.374163       1 main.go:301] handling current node
	I1109 13:45:50.366560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:45:50.366600       1 main.go:301] handling current node
	I1109 13:46:00.369001       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:46:00.369030       1 main.go:301] handling current node
	I1109 13:46:10.374792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:46:10.374820       1 main.go:301] handling current node
	I1109 13:46:20.366832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:46:20.366871       1 main.go:301] handling current node
	I1109 13:46:30.367263       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:46:30.367294       1 main.go:301] handling current node
	
	
	==> kindnet [c9c6886ca29ae3d68f7169d557ca528928d2217ad670d60cabbd7363f28ebc89] <==
	I1109 13:35:05.256840       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 13:35:05.257104       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1109 13:35:05.257226       1 main.go:148] setting mtu 1500 for CNI 
	I1109 13:35:05.257242       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 13:35:05.257263       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T13:35:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 13:35:05.455574       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 13:35:05.455614       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 13:35:05.455630       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 13:35:05.455756       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 13:35:05.855757       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 13:35:05.855783       1 metrics.go:72] Registering metrics
	I1109 13:35:05.855857       1 controller.go:711] "Syncing nftables rules"
	I1109 13:35:15.387423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:35:15.387475       1 main.go:301] handling current node
	I1109 13:35:25.385018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:35:25.385076       1 main.go:301] handling current node
	I1109 13:35:35.384198       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:35:35.384230       1 main.go:301] handling current node
	
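	kindnet's "nri plugin exited: failed to connect to NRI service" line is expected when the runtime does not expose an NRI socket, which is evidently the case here; kindnet logs it once and carries on with its nftables sync. Whether the socket exists can be checked directly (sketch):
	
	  minikube -p functional-630518 ssh -- ls -l /var/run/nri/nri.sock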
	
	==> kube-apiserver [52b9ffb60368f1469567fe7818099692a8766532b22eaf785abc09163c60f453] <==
	I1109 13:36:01.225188       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 13:36:01.422370       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 13:36:02.090810       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1109 13:36:02.294649       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1109 13:36:02.295897       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 13:36:02.299537       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 13:36:02.735157       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 13:36:02.815271       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 13:36:02.856242       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 13:36:02.861753       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 13:36:05.907159       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 13:36:20.824924       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.7.189"}
	I1109 13:36:25.213365       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.206.0"}
	I1109 13:36:28.012749       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.2.63"}
	I1109 13:36:33.971595       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.208.61"}
	E1109 13:36:38.335909       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51396: use of closed network connection
	E1109 13:36:39.126999       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51414: use of closed network connection
	I1109 13:36:40.318367       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.51.203"}
	E1109 13:36:41.025285       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51468: use of closed network connection
	E1109 13:36:42.183989       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51514: use of closed network connection
	E1109 13:36:48.766989       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53048: use of closed network connection
	I1109 13:36:51.148083       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 13:36:51.260754       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.91.197"}
	I1109 13:36:51.271467       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.107.75"}
	I1109 13:46:01.137993       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [157cb16bc43733adf26550f03e59a74f97e5eeea335ec484f143e56f377cb2ca] <==
	I1109 13:35:48.985931       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice_mirroring"
	I1109 13:35:48.991597       1 controllermanager.go:781] "Started controller" controller="replicaset-controller"
	I1109 13:35:48.991813       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1109 13:35:48.991835       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I1109 13:35:48.993826       1 controllermanager.go:781] "Started controller" controller="persistentvolume-binder-controller"
	I1109 13:35:48.993938       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1109 13:35:48.993961       1 shared_informer.go:349] "Waiting for caches to sync" controller="persistent volume"
	I1109 13:35:48.995915       1 controllermanager.go:781] "Started controller" controller="resourceclaim-controller"
	I1109 13:35:48.995979       1 controller.go:397] "Starting resource claim controller" logger="resourceclaim-controller"
	I1109 13:35:48.996034       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource_claim"
	I1109 13:35:49.012886       1 controllermanager.go:781] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1109 13:35:49.012986       1 shared_informer.go:349] "Waiting for caches to sync" controller="validatingadmissionpolicy-status"
	I1109 13:35:49.014725       1 controllermanager.go:781] "Started controller" controller="service-cidr-controller"
	I1109 13:35:49.014747       1 controllermanager.go:744] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1109 13:35:49.014919       1 servicecidrs_controller.go:137] "Starting" logger="service-cidr-controller" controller="service-cidr-controller"
	I1109 13:35:49.014941       1 shared_informer.go:349] "Waiting for caches to sync" controller="service-cidr-controller"
	I1109 13:35:49.019726       1 controllermanager.go:781] "Started controller" controller="replicationcontroller-controller"
	I1109 13:35:49.019938       1 replica_set.go:243] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1109 13:35:49.019956       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicationController"
	I1109 13:35:49.046299       1 shared_informer.go:356] "Caches are synced" controller="tokens"
	I1109 13:35:49.943713       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1109 13:35:49.943896       1 controllermanager.go:781] "Started controller" controller="node-ipam-controller"
	I1109 13:35:49.944312       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1109 13:35:49.944374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node"
	F1109 13:35:49.944397       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pvc-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
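	The F-level line at 13:35:49 (connection refused to 192.168.49.2:8441, the apiserver port for this profile) is what terminates this controller-manager instance while the apiserver is down; the log below, from container fddd7e56..., is its replacement and has all caches synced by 13:36:04. The restart is also visible from the runtime's side (sketch):
	
	  minikube -p functional-630518 ssh -- sudo crictl ps -a --name kube-controller-manager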
	
	==> kube-controller-manager [fddd7e567f42b1f5508dcdd6364d2e44d76c96fecc6333715337304264275b68] <==
	I1109 13:36:04.540326       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 13:36:04.541427       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 13:36:04.541453       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 13:36:04.543843       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 13:36:04.543895       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 13:36:04.543935       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 13:36:04.543942       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 13:36:04.543947       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 13:36:04.544013       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:36:04.544106       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:36:04.544169       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-630518"
	I1109 13:36:04.544208       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:36:04.545048       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:36:04.546126       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:36:04.548500       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:36:04.550693       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 13:36:04.552021       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 13:36:04.570694       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1109 13:36:51.193011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:36:51.201695       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:36:51.209320       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:36:51.215455       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:36:51.219491       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:36:51.224555       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:36:51.228457       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [17afc102318c4d8dd2d7c647d30d41f86889ca21e7de5625a0536b5046fe11bf] <==
	I1109 13:35:05.038538       1 server_linux.go:53] "Using iptables proxy"
	I1109 13:35:05.106634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:35:05.206908       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:35:05.206957       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 13:35:05.207040       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:35:05.229602       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 13:35:05.229692       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:35:05.236215       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:35:05.237166       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:35:05.237234       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:35:05.239566       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:35:05.239584       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:35:05.239592       1 config.go:200] "Starting service config controller"
	I1109 13:35:05.239608       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:35:05.239613       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:35:05.239619       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:35:05.239742       1 config.go:309] "Starting node config controller"
	I1109 13:35:05.239752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:35:05.239759       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:35:05.340686       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:35:05.340719       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:35:05.340724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c67dcb7593b5a8e02b7afad8c02a02f7d8c2eaacf8699baf48c68f7345485094] <==
	I1109 13:35:40.067527       1 server_linux.go:53] "Using iptables proxy"
	I1109 13:35:40.134407       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:35:40.235346       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:35:40.235371       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 13:35:40.235435       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:35:40.252799       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 13:35:40.252839       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:35:40.258212       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:35:40.258544       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:35:40.258567       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:35:40.260077       1 config.go:200] "Starting service config controller"
	I1109 13:35:40.260101       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:35:40.260169       1 config.go:309] "Starting node config controller"
	I1109 13:35:40.260193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:35:40.260379       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:35:40.260392       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:35:40.261146       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:35:40.262483       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:35:40.361035       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:35:40.361058       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 13:35:40.361089       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:35:40.362834       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1109 13:36:01.146555       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:36:01.146563       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1109 13:36:01.146776       1 reflector.go:205] "Failed to watch" err="nodes \"functional-630518\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
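	Both kube-proxy instances log the advisory "nodePortAddresses is unset" warning and themselves suggest the remedy, --nodeport-addresses primary. On a kubeadm-provisioned node like this one the setting would normally go into the kube-proxy ConfigMap rather than onto the command line (sketch; the config.conf key and the pod restart are assumptions based on the standard kubeadm layout):
	
	  kubectl --context functional-630518 -n kube-system edit configmap kube-proxy
	  #   in data "config.conf": set  nodePortAddresses: ["primary"]
	  kubectl --context functional-630518 -n kube-system delete pod -l k8s-app=kube-proxy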
	
	==> kube-scheduler [41457b9c18f84f600709c801f09b33c8703777f614e0e718fe1eb03e5800b36b] <==
	E1109 13:35:56.824103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:35:56.849520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:35:57.090100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:35:57.255791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:35:57.361455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:35:57.451226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 13:35:57.473630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:35:57.542476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:35:57.591560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 13:35:57.706441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 13:35:57.758876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:35:57.793613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 13:35:57.898359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:35:58.075149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 13:35:58.136835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:35:59.730284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 13:36:01.136621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:36:01.140772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:36:01.140949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:36:01.141056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:36:01.141154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 13:36:01.141229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1109 13:36:02.183376       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:36:03.083582       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1109 13:36:03.383759       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [41603547c5069f43bafbbb9056f27a91fe5beabd3b81d628f730c81693c231cb] <==
	E1109 13:34:56.532040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:34:56.532103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:34:56.532127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:34:56.532118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:34:56.532121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:34:56.532201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:34:56.532211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:34:57.363733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:34:57.431103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:34:57.447961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:34:57.458852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 13:34:57.513889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:34:57.574910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:34:57.608384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:34:57.704866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:34:57.722794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:34:57.729776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:34:57.747602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1109 13:35:00.129391       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:35:39.843586       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:35:39.843683       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1109 13:35:39.843844       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1109 13:35:39.843873       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1109 13:35:39.843880       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1109 13:35:39.843904       1 run.go:72] "command failed" err="finished without leader elect"
	
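	The kubelet log below is dominated by a single failure mode: every pull of the unqualified image name kicbase/echo-server is rejected with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". That message comes from the containers/image short-name policy on the node: in enforcing mode an unqualified name must either match a configured alias or resolve to exactly one unqualified-search registry, otherwise the pull fails. This is what keeps the hello-node and hello-node-connect pods in ImagePullBackOff and why those deployments never become ready. The policy and any aliases can be inspected on the node (sketch; the 000-shortnames.conf path is the conventional location, not confirmed from this log):
	
	  minikube -p functional-630518 ssh -- grep -Rn short-name /etc/containers/
	  minikube -p functional-630518 ssh -- cat /etc/containers/registries.conf.d/000-shortnames.conf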
	
	==> kubelet <==
	Nov 09 13:43:59 functional-630518 kubelet[4310]: E1109 13:43:59.406440    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:44:02 functional-630518 kubelet[4310]: E1109 13:44:02.405095    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:44:12 functional-630518 kubelet[4310]: E1109 13:44:12.404708    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:44:15 functional-630518 kubelet[4310]: E1109 13:44:15.404513    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:44:23 functional-630518 kubelet[4310]: E1109 13:44:23.405549    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:44:28 functional-630518 kubelet[4310]: E1109 13:44:28.405026    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:44:36 functional-630518 kubelet[4310]: E1109 13:44:36.404827    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:44:39 functional-630518 kubelet[4310]: E1109 13:44:39.405739    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:44:49 functional-630518 kubelet[4310]: E1109 13:44:49.405256    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:44:50 functional-630518 kubelet[4310]: E1109 13:44:50.405145    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:45:04 functional-630518 kubelet[4310]: E1109 13:45:04.404593    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:45:04 functional-630518 kubelet[4310]: E1109 13:45:04.404690    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:45:15 functional-630518 kubelet[4310]: E1109 13:45:15.404794    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:45:18 functional-630518 kubelet[4310]: E1109 13:45:18.404464    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:45:28 functional-630518 kubelet[4310]: E1109 13:45:28.404669    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:45:32 functional-630518 kubelet[4310]: E1109 13:45:32.404843    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:45:42 functional-630518 kubelet[4310]: E1109 13:45:42.404879    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:45:45 functional-630518 kubelet[4310]: E1109 13:45:45.405053    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:45:55 functional-630518 kubelet[4310]: E1109 13:45:55.404976    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:46:00 functional-630518 kubelet[4310]: E1109 13:46:00.405253    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:46:07 functional-630518 kubelet[4310]: E1109 13:46:07.405200    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:46:13 functional-630518 kubelet[4310]: E1109 13:46:13.405461    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:46:21 functional-630518 kubelet[4310]: E1109 13:46:21.404492    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	Nov 09 13:46:28 functional-630518 kubelet[4310]: E1109 13:46:28.405128    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-f6sxw" podUID="cab0502c-58ff-4f3d-90ac-e4c394be1f5b"
	Nov 09 13:46:33 functional-630518 kubelet[4310]: E1109 13:46:33.405412    4310 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-hh4m6" podUID="08ba367e-f149-49b8-beb8-78d6727fc499"
	
	
	==> kubernetes-dashboard [66abf8b5119c3fa2a4b415f6a7dc3e41f5ee2a7cfb73a2bd08c997dc4860616a] <==
	2025/11/09 13:36:54 Using namespace: kubernetes-dashboard
	2025/11/09 13:36:54 Using in-cluster config to connect to apiserver
	2025/11/09 13:36:54 Using secret token for csrf signing
	2025/11/09 13:36:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 13:36:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 13:36:54 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 13:36:54 Generating JWE encryption key
	2025/11/09 13:36:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 13:36:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 13:36:54 Initializing JWE encryption key from synchronized object
	2025/11/09 13:36:54 Creating in-cluster Sidecar client
	2025/11/09 13:36:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 13:36:54 Serving insecurely on HTTP port: 9090
	2025/11/09 13:37:24 Successful request to sidecar
	2025/11/09 13:36:54 Starting overwatch
	
	
	==> storage-provisioner [4a7f4cbf22506ffed01ab007681150e92eea462c5dd1ba459e4bb21ef4ed7129] <==
	W1109 13:35:16.023884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:16.027072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:35:16.122007       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-630518_b6eb4749-0a45-4bde-a7b0-e3133be64be2!
	W1109 13:35:18.030058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:18.033269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:20.035836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:20.040161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:22.043191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:22.046784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:24.049209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:24.052809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:26.055398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:26.060357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:28.062915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:28.066393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:30.069683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:30.073769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:32.077225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:32.081744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:34.085147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:34.089178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:36.092035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:36.095272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:38.098091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:38.101399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a3ee766a6326588e988d157b2bd39e9a47a32853d52d7ec5b3bcdaafeebc32fe] <==
	W1109 13:46:09.975365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:11.978176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:11.981975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:13.984731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:13.988078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:15.990241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:15.993533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:17.995936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:18.000094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:20.002378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:20.005501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:22.008133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:22.011431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:24.013679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:24.017271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:26.019735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:26.023918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:28.026245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:28.029591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:30.032016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:30.035134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:32.037707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:32.042032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:34.047565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:46:34.052440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-630518 -n functional-630518
helpers_test.go:269: (dbg) Run:  kubectl --context functional-630518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-f6sxw hello-node-connect-7d85dfc575-hh4m6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-630518 describe pod busybox-mount hello-node-75c85bcc94-f6sxw hello-node-connect-7d85dfc575-hh4m6
helpers_test.go:290: (dbg) kubectl --context functional-630518 describe pod busybox-mount hello-node-75c85bcc94-f6sxw hello-node-connect-7d85dfc575-hh4m6:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630518/192.168.49.2
	Start Time:       Sun, 09 Nov 2025 13:36:43 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7f51b115944ac1301509f610d31c24ac6e87a548f79a57ffeaaacb588b70d82b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 09 Nov 2025 13:36:44 +0000
	      Finished:     Sun, 09 Nov 2025 13:36:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hflpc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hflpc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-630518
	  Normal  Pulling    9m52s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m52s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 767ms (767ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m52s  kubelet            Created container: mount-munger
	  Normal  Started    9m52s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-f6sxw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630518/192.168.49.2
	Start Time:       Sun, 09 Nov 2025 13:36:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jqclv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jqclv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m56s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-f6sxw to functional-630518
	  Normal   Pulling    6m49s (x5 over 9m56s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m49s (x5 over 9m56s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m49s (x5 over 9m56s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m46s (x20 over 9m55s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m32s (x21 over 9m55s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-hh4m6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630518/192.168.49.2
	Start Time:       Sun, 09 Nov 2025 13:36:33 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4xrkz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4xrkz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hh4m6 to functional-630518
	  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.65s)
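The recurring pull failure in the events above (and in the kubelet log earlier in this report) is CRI-O's short-name policy at work: `kicbase/echo-server` names no registry host, and with short-name mode set to enforcing the runtime refuses to pick one of the configured unqualified-search registries, reporting the reference as ambiguous instead. Below is a minimal, self-contained Go sketch of the usual qualification check; the first-path-component heuristic (a registry host contains a dot or a colon, or is `localhost`) is an illustrative assumption, not code from minikube or this test suite.

	// shortname.go - illustrative sketch only; not part of the minikube test suite.
	// Classifies an image reference as a short name (no registry host) or a
	// fully qualified one, using the common first-path-component heuristic.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// hasRegistryHost reports whether the first path component of ref looks
	// like a registry host: it contains '.' or ':', or equals "localhost".
	func hasRegistryHost(ref string) bool {
		first := strings.SplitN(ref, "/", 2)[0]
		return first == "localhost" || strings.ContainsAny(first, ".:")
	}
	
	func main() {
		for _, ref := range []string{
			"kicbase/echo-server",                             // short name: ambiguous under enforcing mode
			"docker.io/kicbase/echo-server:latest",            // fully qualified: pulled directly
			"localhost/kicbase/echo-server:functional-630518", // fully qualified (local storage)
		} {
			fmt.Printf("%-50s fully qualified: %v\n", ref, hasRegistryHost(ref))
		}
	}

A fully qualified reference such as `docker.io/kicbase/echo-server:latest` is not subject to short-name resolution at all, so it would avoid this ambiguity regardless of the host's registries.conf.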

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image load --daemon kicbase/echo-server:functional-630518 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-630518" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image load --daemon kicbase/echo-server:functional-630518 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 image load --daemon kicbase/echo-server:functional-630518 --alsologtostderr: (1.046760286s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 image ls: (1.577168529s)
functional_test.go:461: expected "kicbase/echo-server:functional-630518" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-630518
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image load --daemon kicbase/echo-server:functional-630518 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 image load --daemon kicbase/echo-server:functional-630518 --alsologtostderr: (1.847368866s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-630518" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image save kicbase/echo-server:functional-630518 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1109 13:36:33.241846   44291 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:36:33.242185   44291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:36:33.242200   44291 out.go:374] Setting ErrFile to fd 2...
	I1109 13:36:33.242206   44291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:36:33.242468   44291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:36:33.243142   44291 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:36:33.243259   44291 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:36:33.243715   44291 cli_runner.go:164] Run: docker container inspect functional-630518 --format={{.State.Status}}
	I1109 13:36:33.266214   44291 ssh_runner.go:195] Run: systemctl --version
	I1109 13:36:33.266288   44291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630518
	I1109 13:36:33.285262   44291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/functional-630518/id_rsa Username:docker}
	I1109 13:36:33.377578   44291 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1109 13:36:33.377633   44291 cache_images.go:255] Failed to load cached images for "functional-630518": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1109 13:36:33.377671   44291 cache_images.go:267] failed pushing to: functional-630518

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
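Note that this failure is downstream of ImageSaveToFile above: the stderr shows `stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory`, so the load had nothing to read because the earlier `image save` never produced the tarball. A minimal Go sketch of failing fast when the artifact from the previous step is missing (a hypothetical helper, not code from functional_test.go):

	// loadguard.go - illustrative sketch only; not from the minikube test suite.
	// Check that the tarball produced by a prior "image save" step exists
	// before attempting "image load", so the root cause is reported up front.
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// Path taken from the failure above; adjust for your workspace.
		tar := "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar"
	
		fi, err := os.Stat(tar)
		if err != nil {
			fmt.Printf("save artifact missing, skipping load: %v\n", err)
			return
		}
		fmt.Printf("would load %s (%d bytes) into the cluster\n", tar, fi.Size())
		// The real step would now run: minikube image load <tar>
	}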

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-630518
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image save --daemon kicbase/echo-server:functional-630518 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-630518
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-630518: exit status 1 (17.520111ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-630518

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-630518

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-630518 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-630518 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-f6sxw" [cab0502c-58ff-4f3d-90ac-e4c394be1f5b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-630518 -n functional-630518
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-09 13:46:40.626170237 +0000 UTC m=+1079.362657266
functional_test.go:1460: (dbg) Run:  kubectl --context functional-630518 describe po hello-node-75c85bcc94-f6sxw -n default
functional_test.go:1460: (dbg) kubectl --context functional-630518 describe po hello-node-75c85bcc94-f6sxw -n default:
Name:             hello-node-75c85bcc94-f6sxw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-630518/192.168.49.2
Start Time:       Sun, 09 Nov 2025 13:36:40 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jqclv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-jqclv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-f6sxw to functional-630518
  Normal   Pulling    6m53s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m53s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m53s (x5 over 10m)     kubelet            Error: ErrImagePull
  Warning  Failed     4m50s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m36s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-630518 logs hello-node-75c85bcc94-f6sxw -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-630518 logs hello-node-75c85bcc94-f6sxw -n default: exit status 1 (57.14106ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-f6sxw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-630518 logs hello-node-75c85bcc94-f6sxw -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 service --namespace=default --https --url hello-node: exit status 115 (513.817063ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30618
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-630518 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 service hello-node --url --format={{.IP}}: exit status 115 (509.465603ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-630518 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 service hello-node --url: exit status 115 (512.284869ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30618
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-630518 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30618
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.37s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-926498 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-926498 --output=json --user=testUser: exit status 80 (2.367094439s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e5e7798a-04dc-4d6a-aae8-9479f0f1a098","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-926498 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5ca91018-4fd7-4469-a78b-9342e216c42f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-09T13:56:12Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b27f1020-6d60-4373-885d-bc92e81a6fe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-926498 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.37s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.93s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-926498 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-926498 --output=json --user=testUser: exit status 80 (1.92782027s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f6958ad3-76dd-4f6a-b77b-8df9f17ee3ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-926498 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"f5dab118-cc50-4dc8-b71a-6e5763f8bf40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-09T13:56:14Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"65fe83c5-51c6-4507-9101-d230c4edcde9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-926498 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.93s)

                                                
                                    
x
+
TestPause/serial/Pause (5.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-092489 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-092489 --alsologtostderr -v=5: exit status 80 (2.322962506s)

                                                
                                                
-- stdout --
	* Pausing node pause-092489 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:11:22.214605  231643 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:11:22.214720  231643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:11:22.214734  231643 out.go:374] Setting ErrFile to fd 2...
	I1109 14:11:22.214738  231643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:11:22.214967  231643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:11:22.215191  231643 out.go:368] Setting JSON to false
	I1109 14:11:22.215233  231643 mustload.go:66] Loading cluster: pause-092489
	I1109 14:11:22.215550  231643 config.go:182] Loaded profile config "pause-092489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:22.215947  231643 cli_runner.go:164] Run: docker container inspect pause-092489 --format={{.State.Status}}
	I1109 14:11:22.235944  231643 host.go:66] Checking if "pause-092489" exists ...
	I1109 14:11:22.236249  231643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:11:22.296377  231643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-09 14:11:22.285080147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:11:22.297140  231643 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-092489 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:11:22.298665  231643 out.go:179] * Pausing node pause-092489 ... 
	I1109 14:11:22.299758  231643 host.go:66] Checking if "pause-092489" exists ...
	I1109 14:11:22.299999  231643 ssh_runner.go:195] Run: systemctl --version
	I1109 14:11:22.300039  231643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:22.318010  231643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:22.414776  231643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:11:22.427327  231643 pause.go:52] kubelet running: true
	I1109 14:11:22.427373  231643 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:11:22.567396  231643 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:11:22.567532  231643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:11:22.641461  231643 cri.go:89] found id: "2b5b10f4f3f84bb026e0851a309a45699c18e38c6152d977136df4d3d7ce824f"
	I1109 14:11:22.641491  231643 cri.go:89] found id: "f7663e0568a65ffaaf3c55965b13b778d4f8efa35939c24622c19d8f69d183fa"
	I1109 14:11:22.641497  231643 cri.go:89] found id: "e255085db448c22caf3ee1ab534518079a741907de0468fc851c20ed70ff553a"
	I1109 14:11:22.641502  231643 cri.go:89] found id: "d330e1ae80e3a30bfdc4b113f673bf973656704d0579d016dd61b77458e2c396"
	I1109 14:11:22.641506  231643 cri.go:89] found id: "8d4c0bf15d6f73166eefeb2a225847ad6656500266e6c4258eabfbaab89d2c19"
	I1109 14:11:22.641511  231643 cri.go:89] found id: "8d495cb1f952ddb415c3231fdd998bdecdb92da5f5d74d99731326c32330b72d"
	I1109 14:11:22.641515  231643 cri.go:89] found id: "e21ada7ed93b3e0e01d2725059c4155c569e0c4090e0e3079f02bc81d5274700"
	I1109 14:11:22.641519  231643 cri.go:89] found id: ""
	I1109 14:11:22.641562  231643 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:11:22.655516  231643 retry.go:31] will retry after 165.434853ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:11:22Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:11:22.821811  231643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:11:22.833715  231643 pause.go:52] kubelet running: false
	I1109 14:11:22.833768  231643 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:11:22.951191  231643 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:11:22.951257  231643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:11:23.019165  231643 cri.go:89] found id: "2b5b10f4f3f84bb026e0851a309a45699c18e38c6152d977136df4d3d7ce824f"
	I1109 14:11:23.019195  231643 cri.go:89] found id: "f7663e0568a65ffaaf3c55965b13b778d4f8efa35939c24622c19d8f69d183fa"
	I1109 14:11:23.019201  231643 cri.go:89] found id: "e255085db448c22caf3ee1ab534518079a741907de0468fc851c20ed70ff553a"
	I1109 14:11:23.019206  231643 cri.go:89] found id: "d330e1ae80e3a30bfdc4b113f673bf973656704d0579d016dd61b77458e2c396"
	I1109 14:11:23.019210  231643 cri.go:89] found id: "8d4c0bf15d6f73166eefeb2a225847ad6656500266e6c4258eabfbaab89d2c19"
	I1109 14:11:23.019215  231643 cri.go:89] found id: "8d495cb1f952ddb415c3231fdd998bdecdb92da5f5d74d99731326c32330b72d"
	I1109 14:11:23.019219  231643 cri.go:89] found id: "e21ada7ed93b3e0e01d2725059c4155c569e0c4090e0e3079f02bc81d5274700"
	I1109 14:11:23.019223  231643 cri.go:89] found id: ""
	I1109 14:11:23.019281  231643 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:11:23.031514  231643 retry.go:31] will retry after 555.984791ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:11:23Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:11:23.587826  231643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:11:23.600210  231643 pause.go:52] kubelet running: false
	I1109 14:11:23.600257  231643 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:11:23.705425  231643 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:11:23.705513  231643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:11:23.768952  231643 cri.go:89] found id: "2b5b10f4f3f84bb026e0851a309a45699c18e38c6152d977136df4d3d7ce824f"
	I1109 14:11:23.768971  231643 cri.go:89] found id: "f7663e0568a65ffaaf3c55965b13b778d4f8efa35939c24622c19d8f69d183fa"
	I1109 14:11:23.768977  231643 cri.go:89] found id: "e255085db448c22caf3ee1ab534518079a741907de0468fc851c20ed70ff553a"
	I1109 14:11:23.768981  231643 cri.go:89] found id: "d330e1ae80e3a30bfdc4b113f673bf973656704d0579d016dd61b77458e2c396"
	I1109 14:11:23.768985  231643 cri.go:89] found id: "8d4c0bf15d6f73166eefeb2a225847ad6656500266e6c4258eabfbaab89d2c19"
	I1109 14:11:23.768989  231643 cri.go:89] found id: "8d495cb1f952ddb415c3231fdd998bdecdb92da5f5d74d99731326c32330b72d"
	I1109 14:11:23.768992  231643 cri.go:89] found id: "e21ada7ed93b3e0e01d2725059c4155c569e0c4090e0e3079f02bc81d5274700"
	I1109 14:11:23.768996  231643 cri.go:89] found id: ""
	I1109 14:11:23.769035  231643 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:11:23.780324  231643 retry.go:31] will retry after 491.808866ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:11:23Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:11:24.272845  231643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:11:24.285806  231643 pause.go:52] kubelet running: false
	I1109 14:11:24.285862  231643 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:11:24.393903  231643 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:11:24.393989  231643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:11:24.457457  231643 cri.go:89] found id: "2b5b10f4f3f84bb026e0851a309a45699c18e38c6152d977136df4d3d7ce824f"
	I1109 14:11:24.457482  231643 cri.go:89] found id: "f7663e0568a65ffaaf3c55965b13b778d4f8efa35939c24622c19d8f69d183fa"
	I1109 14:11:24.457487  231643 cri.go:89] found id: "e255085db448c22caf3ee1ab534518079a741907de0468fc851c20ed70ff553a"
	I1109 14:11:24.457490  231643 cri.go:89] found id: "d330e1ae80e3a30bfdc4b113f673bf973656704d0579d016dd61b77458e2c396"
	I1109 14:11:24.457492  231643 cri.go:89] found id: "8d4c0bf15d6f73166eefeb2a225847ad6656500266e6c4258eabfbaab89d2c19"
	I1109 14:11:24.457495  231643 cri.go:89] found id: "8d495cb1f952ddb415c3231fdd998bdecdb92da5f5d74d99731326c32330b72d"
	I1109 14:11:24.457497  231643 cri.go:89] found id: "e21ada7ed93b3e0e01d2725059c4155c569e0c4090e0e3079f02bc81d5274700"
	I1109 14:11:24.457509  231643 cri.go:89] found id: ""
	I1109 14:11:24.457545  231643 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:11:24.470605  231643 out.go:203] 
	W1109 14:11:24.471704  231643 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:11:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:11:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:11:24.471729  231643 out.go:285] * 
	* 
	W1109 14:11:24.475882  231643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:11:24.477024  231643 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-092489 --alsologtostderr -v=5" : exit status 80
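The pause failure above follows a fixed pattern: the kubelet check and the crictl listings succeed (seven container IDs on every pass), but each `sudo runc list -f json` attempt fails because runc's default state root, /run/runc, is missing on the CRI-O node, so the retries are exhausted and the command exits with GUEST_PAUSE / status 80. The Go sketch below is a minimal, hypothetical diagnostic for that failure mode, not minikube's own code; it assumes it is run on the node itself (for example via `minikube ssh -p pause-092489`) with sudo and crictl available.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The repeated retry failure above is runc reporting that its default
	// state root (/run/runc) does not exist on this CRI-O node.
	if _, err := os.Stat("/run/runc"); err != nil {
		fmt.Printf("runc state root not usable: %v\n", err)
	}

	// CRI-O itself still sees the containers, which is why the crictl calls
	// in the log return IDs while `runc list -f json` exits non-zero.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl ps failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("running kube-system containers:\n%s", out)
}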
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-092489
helpers_test.go:243: (dbg) docker inspect pause-092489:

-- stdout --
	[
	    {
	        "Id": "3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447",
	        "Created": "2025-11-09T14:10:40.253996241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220122,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:10:40.291470556Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447/hostname",
	        "HostsPath": "/var/lib/docker/containers/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447/hosts",
	        "LogPath": "/var/lib/docker/containers/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447-json.log",
	        "Name": "/pause-092489",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-092489:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-092489",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447",
	                "LowerDir": "/var/lib/docker/overlay2/9650de60d0693888c238c1d66c538850cf40b26a7e9ad7d964d55dabc981a0b8-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9650de60d0693888c238c1d66c538850cf40b26a7e9ad7d964d55dabc981a0b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9650de60d0693888c238c1d66c538850cf40b26a7e9ad7d964d55dabc981a0b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9650de60d0693888c238c1d66c538850cf40b26a7e9ad7d964d55dabc981a0b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-092489",
	                "Source": "/var/lib/docker/volumes/pause-092489/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-092489",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-092489",
	                "name.minikube.sigs.k8s.io": "pause-092489",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2e01f7eb320ca5aae66a6badb04816610aef4f10dbf4fba8b14c54954bc7923a",
	            "SandboxKey": "/var/run/docker/netns/2e01f7eb320c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-092489": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:60:7f:62:be:4d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8ec64412f6290172992317511185500a29434ee189e01028126e0e8cf658a217",
	                    "EndpointID": "74a56f070bedbe95ea1f9ce80cbc203108d2f435037a9846d4d95e1f88f10e88",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-092489",
	                        "3283112c9e94"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
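For reference, the SSH port used throughout the log (33045) comes straight out of this inspect document: the cli_runner calls above resolve it with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`. A rough sketch of extracting the same value by decoding the JSON directly is below; the structs model only the fields needed here and are an assumption about the relevant subset of the inspect schema, not Docker's full format.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding and container model only what this sketch needs from `docker inspect`.
type portBinding struct {
	HostIp   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-092489").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	// Equivalent to the template minikube runs via cli_runner:
	//   docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-092489
	fmt.Println(cs[0].NetworkSettings.Ports["22/tcp"][0].HostPort) // 33045 in this run
}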
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-092489 -n pause-092489
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-092489 -n pause-092489: exit status 2 (306.330956ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
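Note that `status --format={{.Host}}` prints Running yet exits with code 2: the host container is still up, but the failed pause had already disabled the kubelet, and the harness deliberately tolerates the non-zero exit ("may be ok"). A small sketch of the same probe follows; the binary path and profile name are taken from this report, and treating the exit code as informational mirrors the harness's handling rather than documented minikube behavior.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "pause-092489", "-n", "pause-092489")
	out, err := cmd.CombinedOutput()
	fmt.Printf("host state: %s\n", out) // "Running" in this run
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit here reflects degraded components (kubelet disabled);
		// the post-mortem helper logs it and continues.
		fmt.Printf("status exited with code %d (may be ok)\n", exitErr.ExitCode())
	}
}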
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-092489 logs -n 25
E1109 14:11:25.253457    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-593530 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo docker system info                                                                                                                                                                                                      │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo containerd config dump                                                                                                                                                                                                  │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo crio config                                                                                                                                                                                                             │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ delete  │ -p cilium-593530                                                                                                                                                                                                                              │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │ 09 Nov 25 14:10 UTC │
	│ start   │ -p cert-options-350702 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ cert-options-350702 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ -p cert-options-350702 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ delete  │ -p cert-options-350702                                                                                                                                                                                                                        │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	│ pause   │ -p pause-092489 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:11:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:11:14.433122  228825 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:11:14.433365  228825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:11:14.433373  228825 out.go:374] Setting ErrFile to fd 2...
	I1109 14:11:14.433378  228825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:11:14.433541  228825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:11:14.434028  228825 out.go:368] Setting JSON to false
	I1109 14:11:14.435048  228825 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3224,"bootTime":1762694250,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:11:14.435124  228825 start.go:143] virtualization: kvm guest
	I1109 14:11:14.436963  228825 out.go:179] * [old-k8s-version-169816] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:11:14.438112  228825 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:11:14.438112  228825 notify.go:221] Checking for updates...
	I1109 14:11:14.439240  228825 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:11:14.440338  228825 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:11:14.441779  228825 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:11:14.442807  228825 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:11:14.443981  228825 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:11:14.445611  228825 config.go:182] Loaded profile config "cert-expiration-883873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:14.445766  228825 config.go:182] Loaded profile config "kubernetes-upgrade-755159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:14.445957  228825 config.go:182] Loaded profile config "pause-092489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:14.446063  228825 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:11:14.469634  228825 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:11:14.469779  228825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:11:14.528499  228825 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:11:14.51874498 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:11:14.528637  228825 docker.go:319] overlay module found
	I1109 14:11:14.530001  228825 out.go:179] * Using the docker driver based on user configuration
	I1109 14:11:14.530944  228825 start.go:309] selected driver: docker
	I1109 14:11:14.530960  228825 start.go:930] validating driver "docker" against <nil>
	I1109 14:11:14.530979  228825 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:11:14.531522  228825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:11:14.589269  228825 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:11:14.578584959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:11:14.589455  228825 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:11:14.589679  228825 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:11:14.591083  228825 out.go:179] * Using Docker driver with root privileges
	I1109 14:11:14.592060  228825 cni.go:84] Creating CNI manager for ""
	I1109 14:11:14.592125  228825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:14.592138  228825 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:11:14.592193  228825 start.go:353] cluster config:
	{Name:old-k8s-version-169816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:11:14.593288  228825 out.go:179] * Starting "old-k8s-version-169816" primary control-plane node in "old-k8s-version-169816" cluster
	I1109 14:11:14.594161  228825 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:11:14.595160  228825 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:11:14.596239  228825 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:11:14.596276  228825 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1109 14:11:14.596290  228825 cache.go:65] Caching tarball of preloaded images
	I1109 14:11:14.596332  228825 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:11:14.596400  228825 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:11:14.596416  228825 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1109 14:11:14.596533  228825 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/config.json ...
	I1109 14:11:14.596565  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/config.json: {Name:mk13069d07b835bb3fb802a66fbc1e8d8b175551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:14.617803  228825 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:11:14.617826  228825 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:11:14.617844  228825 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:11:14.617874  228825 start.go:360] acquireMachinesLock for old-k8s-version-169816: {Name:mkedf065ffc7d3ee8fd51a7c60a11da8a2f72508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:14.617971  228825 start.go:364] duration metric: took 79.183µs to acquireMachinesLock for "old-k8s-version-169816"
	I1109 14:11:14.617995  228825 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-169816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:11:14.618080  228825 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:11:13.982712  228465 out.go:252] * Updating the running docker "pause-092489" container ...
	I1109 14:11:13.982754  228465 machine.go:94] provisionDockerMachine start ...
	I1109 14:11:13.982848  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:13.999841  228465 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:14.000101  228465 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:11:14.000114  228465 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:11:14.124883  228465 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092489
	
	I1109 14:11:14.124917  228465 ubuntu.go:182] provisioning hostname "pause-092489"
	I1109 14:11:14.124976  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:14.142898  228465 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:14.143161  228465 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:11:14.143176  228465 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-092489 && echo "pause-092489" | sudo tee /etc/hostname
	I1109 14:11:14.281008  228465 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092489
	
	I1109 14:11:14.281073  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:14.299970  228465 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:14.300193  228465 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:11:14.300216  228465 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-092489' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-092489/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-092489' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:11:14.427271  228465 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:11:14.427300  228465 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:11:14.427317  228465 ubuntu.go:190] setting up certificates
	I1109 14:11:14.427334  228465 provision.go:84] configureAuth start
	I1109 14:11:14.427378  228465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-092489
	I1109 14:11:14.447104  228465 provision.go:143] copyHostCerts
	I1109 14:11:14.447166  228465 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:11:14.447185  228465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:11:14.447272  228465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:11:14.447423  228465 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:11:14.447444  228465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:11:14.447486  228465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:11:14.447587  228465 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:11:14.447598  228465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:11:14.447634  228465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:11:14.447723  228465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.pause-092489 san=[127.0.0.1 192.168.103.2 localhost minikube pause-092489]
	I1109 14:11:14.543595  228465 provision.go:177] copyRemoteCerts
	I1109 14:11:14.543679  228465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:11:14.543722  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:14.566434  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:14.664136  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:11:14.682161  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1109 14:11:14.701792  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:11:14.719557  228465 provision.go:87] duration metric: took 292.210982ms to configureAuth
	I1109 14:11:14.719585  228465 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:11:14.719772  228465 config.go:182] Loaded profile config "pause-092489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:14.719852  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:14.738348  228465 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:14.738630  228465 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:11:14.738698  228465 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:11:15.052519  228465 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:11:15.052545  228465 machine.go:97] duration metric: took 1.069769509s to provisionDockerMachine
	I1109 14:11:15.052559  228465 start.go:293] postStartSetup for "pause-092489" (driver="docker")
	I1109 14:11:15.052571  228465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:11:15.052663  228465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:11:15.052713  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:15.073932  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:15.180117  228465 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:11:15.183715  228465 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:11:15.183745  228465 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:11:15.183756  228465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:11:15.183804  228465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:11:15.183873  228465 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:11:15.183964  228465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:11:15.192797  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:15.209810  228465 start.go:296] duration metric: took 157.237895ms for postStartSetup
	I1109 14:11:15.209880  228465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:11:15.209925  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:15.231110  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:15.334127  228465 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:11:15.339126  228465 fix.go:56] duration metric: took 1.377151789s for fixHost
	I1109 14:11:15.339154  228465 start.go:83] releasing machines lock for "pause-092489", held for 1.377206222s
	I1109 14:11:15.339230  228465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-092489
	I1109 14:11:15.356999  228465 ssh_runner.go:195] Run: cat /version.json
	I1109 14:11:15.357051  228465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:11:15.357057  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:15.357105  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:15.376528  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:15.376866  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:15.548199  228465 ssh_runner.go:195] Run: systemctl --version
	I1109 14:11:15.554701  228465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:11:15.588394  228465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:11:15.593081  228465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:11:15.593136  228465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:11:15.601406  228465 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:11:15.601432  228465 start.go:496] detecting cgroup driver to use...
	I1109 14:11:15.601464  228465 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:11:15.601515  228465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:11:15.615546  228465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:11:15.628182  228465 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:11:15.628250  228465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:11:15.643522  228465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:11:15.655578  228465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:11:15.765851  228465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:11:15.878148  228465 docker.go:234] disabling docker service ...
	I1109 14:11:15.878203  228465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:11:15.893401  228465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:11:15.906430  228465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:11:16.015567  228465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:11:16.131691  228465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:11:16.144365  228465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:11:16.158466  228465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:11:16.158512  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.167488  228465 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:11:16.167549  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.176841  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.185834  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.194323  228465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:11:16.203017  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.212414  228465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.220619  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.229483  228465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:11:16.237238  228465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:11:16.245695  228465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:16.382251  228465 ssh_runner.go:195] Run: sudo systemctl restart crio
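The sed/tee commands logged between 14:11:16.144 and 14:11:16.237 rewrite CRI-O's minikube drop-in before the restart above. As a rough sketch only, assuming the stock kicbase layout of /etc/crio/crio.conf.d/02-crio.conf, the touched keys should end up roughly as follows (the grep below is illustrative and not part of the test run):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly, after the edits logged above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]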
	I1109 14:11:16.180162  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:16.180599  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:16.180671  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:16.180726  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:16.208145  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:16.208163  188127 cri.go:89] found id: ""
	I1109 14:11:16.208172  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:16.208221  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:16.212212  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:16.212272  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:16.241268  188127 cri.go:89] found id: ""
	I1109 14:11:16.241294  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.241304  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:16.241312  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:16.241359  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:16.271861  188127 cri.go:89] found id: ""
	I1109 14:11:16.271885  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.271893  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:16.271900  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:16.271950  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:16.307010  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:16.307041  188127 cri.go:89] found id: ""
	I1109 14:11:16.307052  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:16.307107  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:16.311855  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:16.311918  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:16.340890  188127 cri.go:89] found id: ""
	I1109 14:11:16.340916  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.340927  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:16.340935  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:16.340996  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:16.371701  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:16.371726  188127 cri.go:89] found id: "b02359ee13a07cadd6359d3fb2ebf8cca84546b60e170eaf5853736affecd2d4"
	I1109 14:11:16.371732  188127 cri.go:89] found id: ""
	I1109 14:11:16.371742  188127 logs.go:282] 2 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd b02359ee13a07cadd6359d3fb2ebf8cca84546b60e170eaf5853736affecd2d4]
	I1109 14:11:16.371798  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:16.375997  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:16.380227  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:16.380279  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:16.412068  188127 cri.go:89] found id: ""
	I1109 14:11:16.412097  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.412107  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:16.412115  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:16.412171  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:16.438766  188127 cri.go:89] found id: ""
	I1109 14:11:16.438788  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.438796  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:16.438810  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:16.438822  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:16.521585  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:16.521629  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:16.538010  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:16.538047  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:16.594149  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:16.594175  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:16.594193  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:16.662427  188127 logs.go:123] Gathering logs for kube-controller-manager [b02359ee13a07cadd6359d3fb2ebf8cca84546b60e170eaf5853736affecd2d4] ...
	I1109 14:11:16.662471  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b02359ee13a07cadd6359d3fb2ebf8cca84546b60e170eaf5853736affecd2d4"
	I1109 14:11:16.691486  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:16.691523  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:16.737113  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:16.737147  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:16.769940  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:16.769983  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:16.802590  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:16.802619  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:19.099666  228465 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.717360765s)
	I1109 14:11:19.099700  228465 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:11:19.099747  228465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:11:19.104785  228465 start.go:564] Will wait 60s for crictl version
	I1109 14:11:19.104833  228465 ssh_runner.go:195] Run: which crictl
	I1109 14:11:19.108561  228465 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:11:19.132970  228465 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:11:19.133033  228465 ssh_runner.go:195] Run: crio --version
	I1109 14:11:19.164286  228465 ssh_runner.go:195] Run: crio --version
	I1109 14:11:19.193242  228465 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:11:14.619545  228825 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:11:14.619760  228825 start.go:159] libmachine.API.Create for "old-k8s-version-169816" (driver="docker")
	I1109 14:11:14.619791  228825 client.go:173] LocalClient.Create starting
	I1109 14:11:14.619870  228825 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:11:14.619909  228825 main.go:143] libmachine: Decoding PEM data...
	I1109 14:11:14.619938  228825 main.go:143] libmachine: Parsing certificate...
	I1109 14:11:14.620017  228825 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:11:14.620048  228825 main.go:143] libmachine: Decoding PEM data...
	I1109 14:11:14.620063  228825 main.go:143] libmachine: Parsing certificate...
	I1109 14:11:14.620387  228825 cli_runner.go:164] Run: docker network inspect old-k8s-version-169816 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:11:14.636497  228825 cli_runner.go:211] docker network inspect old-k8s-version-169816 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:11:14.636557  228825 network_create.go:284] running [docker network inspect old-k8s-version-169816] to gather additional debugging logs...
	I1109 14:11:14.636576  228825 cli_runner.go:164] Run: docker network inspect old-k8s-version-169816
	W1109 14:11:14.652136  228825 cli_runner.go:211] docker network inspect old-k8s-version-169816 returned with exit code 1
	I1109 14:11:14.652158  228825 network_create.go:287] error running [docker network inspect old-k8s-version-169816]: docker network inspect old-k8s-version-169816: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-169816 not found
	I1109 14:11:14.652168  228825 network_create.go:289] output of [docker network inspect old-k8s-version-169816]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-169816 not found
	
	** /stderr **
	I1109 14:11:14.652301  228825 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:11:14.669546  228825 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:11:14.670484  228825 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:11:14.671341  228825 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:11:14.672188  228825 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e75730}
	I1109 14:11:14.672208  228825 network_create.go:124] attempt to create docker network old-k8s-version-169816 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 14:11:14.672253  228825 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-169816 old-k8s-version-169816
	I1109 14:11:14.733198  228825 network_create.go:108] docker network old-k8s-version-169816 192.168.76.0/24 created
	I1109 14:11:14.733226  228825 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-169816" container
	I1109 14:11:14.733275  228825 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:11:14.751440  228825 cli_runner.go:164] Run: docker volume create old-k8s-version-169816 --label name.minikube.sigs.k8s.io=old-k8s-version-169816 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:11:14.769040  228825 oci.go:103] Successfully created a docker volume old-k8s-version-169816
	I1109 14:11:14.769115  228825 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-169816-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-169816 --entrypoint /usr/bin/test -v old-k8s-version-169816:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:11:15.143818  228825 oci.go:107] Successfully prepared a docker volume old-k8s-version-169816
	I1109 14:11:15.143878  228825 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:11:15.143886  228825 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:11:15.143942  228825 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-169816:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:11:19.038182  228825 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-169816:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.894188837s)
	I1109 14:11:19.038211  228825 kic.go:203] duration metric: took 3.894322602s to extract preloaded images to volume ...
	W1109 14:11:19.038309  228825 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 14:11:19.038340  228825 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 14:11:19.038382  228825 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:11:19.097924  228825 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-169816 --name old-k8s-version-169816 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-169816 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-169816 --network old-k8s-version-169816 --ip 192.168.76.2 --volume old-k8s-version-169816:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:11:19.430702  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Running}}
	I1109 14:11:19.194310  228465 cli_runner.go:164] Run: docker network inspect pause-092489 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:11:19.211021  228465 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1109 14:11:19.214970  228465 kubeadm.go:884] updating cluster {Name:pause-092489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-092489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regis
try-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:11:19.215114  228465 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:11:19.215163  228465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:19.244428  228465 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:11:19.244449  228465 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:11:19.244500  228465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:19.279598  228465 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:11:19.279627  228465 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:11:19.279653  228465 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1109 14:11:19.280017  228465 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-092489 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-092489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:11:19.280105  228465 ssh_runner.go:195] Run: crio config
	I1109 14:11:19.329078  228465 cni.go:84] Creating CNI manager for ""
	I1109 14:11:19.329096  228465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:19.329107  228465 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:11:19.329126  228465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-092489 NodeName:pause-092489 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:11:19.329239  228465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-092489"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:11:19.329296  228465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:11:19.339152  228465 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:11:19.339223  228465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:11:19.347204  228465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:11:19.361081  228465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:11:19.374884  228465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1109 14:11:19.388903  228465 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:11:19.392893  228465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:19.535122  228465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:11:19.550858  228465 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489 for IP: 192.168.103.2
	I1109 14:11:19.550879  228465 certs.go:195] generating shared ca certs ...
	I1109 14:11:19.550898  228465 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:19.551056  228465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:11:19.551111  228465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:11:19.551124  228465 certs.go:257] generating profile certs ...
	I1109 14:11:19.551283  228465 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.key
	I1109 14:11:19.551359  228465 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/apiserver.key.451f2da0
	I1109 14:11:19.551414  228465 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/proxy-client.key
	I1109 14:11:19.551576  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:11:19.551620  228465 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:11:19.551629  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:11:19.551718  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:11:19.551750  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:11:19.551780  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:11:19.551835  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:19.552728  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:11:19.572308  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:11:19.596767  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:11:19.614677  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:11:19.633350  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 14:11:19.653028  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:11:19.676167  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:11:19.696461  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:11:19.716292  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:11:19.745095  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:11:19.767471  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:11:19.786236  228465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:11:19.802077  228465 ssh_runner.go:195] Run: openssl version
	I1109 14:11:19.810387  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:11:19.822332  228465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:11:19.828260  228465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:11:19.828353  228465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:11:19.882099  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:11:19.893535  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:11:19.904103  228465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:11:19.908965  228465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:11:19.909012  228465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:11:19.955271  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:11:19.964564  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:11:19.974757  228465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:19.979353  228465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:19.979403  228465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:20.022912  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:11:20.031481  228465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:11:20.035276  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:11:20.071018  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:11:20.112023  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:11:20.149085  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:11:20.182706  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:11:20.215817  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
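The openssl invocations above all pass -checkend 86400, which exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours), presumably so the start path can detect and renew soon-to-expire control-plane certificates. A minimal stand-alone check of the same kind, using a path already named in the log:
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "cert valid for >= 24h" || echo "cert expires within 24h"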
	I1109 14:11:20.252170  228465 kubeadm.go:401] StartCluster: {Name:pause-092489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-092489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry
-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:11:20.252299  228465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:11:20.252336  228465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:11:20.278537  228465 cri.go:89] found id: "2b5b10f4f3f84bb026e0851a309a45699c18e38c6152d977136df4d3d7ce824f"
	I1109 14:11:20.278562  228465 cri.go:89] found id: "f7663e0568a65ffaaf3c55965b13b778d4f8efa35939c24622c19d8f69d183fa"
	I1109 14:11:20.278569  228465 cri.go:89] found id: "e255085db448c22caf3ee1ab534518079a741907de0468fc851c20ed70ff553a"
	I1109 14:11:20.278573  228465 cri.go:89] found id: "d330e1ae80e3a30bfdc4b113f673bf973656704d0579d016dd61b77458e2c396"
	I1109 14:11:20.278578  228465 cri.go:89] found id: "8d4c0bf15d6f73166eefeb2a225847ad6656500266e6c4258eabfbaab89d2c19"
	I1109 14:11:20.278582  228465 cri.go:89] found id: "8d495cb1f952ddb415c3231fdd998bdecdb92da5f5d74d99731326c32330b72d"
	I1109 14:11:20.278587  228465 cri.go:89] found id: "e21ada7ed93b3e0e01d2725059c4155c569e0c4090e0e3079f02bc81d5274700"
	I1109 14:11:20.278591  228465 cri.go:89] found id: ""
	I1109 14:11:20.278624  228465 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:11:20.290452  228465 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:11:20Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:11:20.290518  228465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:11:20.298218  228465 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:11:20.298236  228465 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:11:20.298274  228465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:11:20.305471  228465 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:11:20.306130  228465 kubeconfig.go:125] found "pause-092489" server: "https://192.168.103.2:8443"
	I1109 14:11:20.307010  228465 kapi.go:59] client config for pause-092489: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.key", CAFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:11:20.307361  228465 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 14:11:20.307379  228465 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 14:11:20.307386  228465 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 14:11:20.307392  228465 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 14:11:20.307398  228465 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 14:11:20.307701  228465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:11:20.315312  228465 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1109 14:11:20.315341  228465 kubeadm.go:602] duration metric: took 17.09827ms to restartPrimaryControlPlane
	I1109 14:11:20.315351  228465 kubeadm.go:403] duration metric: took 63.187039ms to StartCluster
	I1109 14:11:20.315365  228465 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:20.315429  228465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:11:20.316791  228465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:20.317023  228465 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:11:20.317099  228465 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:11:20.317288  228465 config.go:182] Loaded profile config "pause-092489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:20.319113  228465 out.go:179] * Verifying Kubernetes components...
	I1109 14:11:20.319115  228465 out.go:179] * Enabled addons: 
	I1109 14:11:20.320077  228465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:20.320114  228465 addons.go:515] duration metric: took 3.021449ms for enable addons: enabled=[]
	I1109 14:11:20.420564  228465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:11:20.433045  228465 node_ready.go:35] waiting up to 6m0s for node "pause-092489" to be "Ready" ...
	I1109 14:11:20.440498  228465 node_ready.go:49] node "pause-092489" is "Ready"
	I1109 14:11:20.440527  228465 node_ready.go:38] duration metric: took 7.453948ms for node "pause-092489" to be "Ready" ...
	I1109 14:11:20.440537  228465 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:11:20.440569  228465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:11:20.451690  228465 api_server.go:72] duration metric: took 134.63563ms to wait for apiserver process to appear ...
	I1109 14:11:20.451716  228465 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:11:20.451733  228465 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1109 14:11:20.455755  228465 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1109 14:11:20.456492  228465 api_server.go:141] control plane version: v1.34.1
	I1109 14:11:20.456511  228465 api_server.go:131] duration metric: took 4.789699ms to wait for apiserver health ...
	I1109 14:11:20.456518  228465 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:11:20.459676  228465 system_pods.go:59] 7 kube-system pods found
	I1109 14:11:20.459704  228465 system_pods.go:61] "coredns-66bc5c9577-z82qd" [0bab7054-1d49-4279-9e6f-62c7dd91785d] Running
	I1109 14:11:20.459711  228465 system_pods.go:61] "etcd-pause-092489" [b96ed30d-4f9d-4286-a1ce-3fbb472b684d] Running
	I1109 14:11:20.459719  228465 system_pods.go:61] "kindnet-h2j52" [61515b37-d564-420e-b3b9-9814a711b0f4] Running
	I1109 14:11:20.459727  228465 system_pods.go:61] "kube-apiserver-pause-092489" [783c57f0-2ba9-45dd-8f73-66ff35cc8a4e] Running
	I1109 14:11:20.459730  228465 system_pods.go:61] "kube-controller-manager-pause-092489" [c0464d94-b6aa-412a-91fa-76112d2b375d] Running
	I1109 14:11:20.459736  228465 system_pods.go:61] "kube-proxy-j62h5" [d33cd6cb-b566-4fe8-81c8-13a78abcf6c0] Running
	I1109 14:11:20.459739  228465 system_pods.go:61] "kube-scheduler-pause-092489" [8889c9f6-9e92-4287-b1c1-abeb0c5048ba] Running
	I1109 14:11:20.459748  228465 system_pods.go:74] duration metric: took 3.225072ms to wait for pod list to return data ...
	I1109 14:11:20.459759  228465 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:11:20.461523  228465 default_sa.go:45] found service account: "default"
	I1109 14:11:20.461538  228465 default_sa.go:55] duration metric: took 1.771013ms for default service account to be created ...
	I1109 14:11:20.461545  228465 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:11:20.463861  228465 system_pods.go:86] 7 kube-system pods found
	I1109 14:11:20.463880  228465 system_pods.go:89] "coredns-66bc5c9577-z82qd" [0bab7054-1d49-4279-9e6f-62c7dd91785d] Running
	I1109 14:11:20.463884  228465 system_pods.go:89] "etcd-pause-092489" [b96ed30d-4f9d-4286-a1ce-3fbb472b684d] Running
	I1109 14:11:20.463888  228465 system_pods.go:89] "kindnet-h2j52" [61515b37-d564-420e-b3b9-9814a711b0f4] Running
	I1109 14:11:20.463891  228465 system_pods.go:89] "kube-apiserver-pause-092489" [783c57f0-2ba9-45dd-8f73-66ff35cc8a4e] Running
	I1109 14:11:20.463894  228465 system_pods.go:89] "kube-controller-manager-pause-092489" [c0464d94-b6aa-412a-91fa-76112d2b375d] Running
	I1109 14:11:20.463898  228465 system_pods.go:89] "kube-proxy-j62h5" [d33cd6cb-b566-4fe8-81c8-13a78abcf6c0] Running
	I1109 14:11:20.463901  228465 system_pods.go:89] "kube-scheduler-pause-092489" [8889c9f6-9e92-4287-b1c1-abeb0c5048ba] Running
	I1109 14:11:20.463906  228465 system_pods.go:126] duration metric: took 2.356919ms to wait for k8s-apps to be running ...
	I1109 14:11:20.463915  228465 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:11:20.463944  228465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:11:20.475573  228465 system_svc.go:56] duration metric: took 11.653869ms WaitForService to wait for kubelet
	I1109 14:11:20.475594  228465 kubeadm.go:587] duration metric: took 158.541103ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:11:20.475617  228465 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:11:20.477452  228465 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:11:20.477471  228465 node_conditions.go:123] node cpu capacity is 8
	I1109 14:11:20.477481  228465 node_conditions.go:105] duration metric: took 1.859163ms to run NodePressure ...
	I1109 14:11:20.477490  228465 start.go:242] waiting for startup goroutines ...
	I1109 14:11:20.477497  228465 start.go:247] waiting for cluster config update ...
	I1109 14:11:20.477503  228465 start.go:256] writing updated cluster config ...
	I1109 14:11:20.477786  228465 ssh_runner.go:195] Run: rm -f paused
	I1109 14:11:20.481118  228465 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:11:20.481776  228465 kapi.go:59] client config for pause-092489: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.key", CAFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:11:20.483722  228465 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z82qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.487268  228465 pod_ready.go:94] pod "coredns-66bc5c9577-z82qd" is "Ready"
	I1109 14:11:20.487285  228465 pod_ready.go:86] duration metric: took 3.543159ms for pod "coredns-66bc5c9577-z82qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.488866  228465 pod_ready.go:83] waiting for pod "etcd-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.492081  228465 pod_ready.go:94] pod "etcd-pause-092489" is "Ready"
	I1109 14:11:20.492099  228465 pod_ready.go:86] duration metric: took 3.218512ms for pod "etcd-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.493749  228465 pod_ready.go:83] waiting for pod "kube-apiserver-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.496968  228465 pod_ready.go:94] pod "kube-apiserver-pause-092489" is "Ready"
	I1109 14:11:20.496984  228465 pod_ready.go:86] duration metric: took 3.217579ms for pod "kube-apiserver-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.498608  228465 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.885264  228465 pod_ready.go:94] pod "kube-controller-manager-pause-092489" is "Ready"
	I1109 14:11:20.885292  228465 pod_ready.go:86] duration metric: took 386.667017ms for pod "kube-controller-manager-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:21.085178  228465 pod_ready.go:83] waiting for pod "kube-proxy-j62h5" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:21.485433  228465 pod_ready.go:94] pod "kube-proxy-j62h5" is "Ready"
	I1109 14:11:21.485458  228465 pod_ready.go:86] duration metric: took 400.259087ms for pod "kube-proxy-j62h5" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:21.685552  228465 pod_ready.go:83] waiting for pod "kube-scheduler-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:22.085178  228465 pod_ready.go:94] pod "kube-scheduler-pause-092489" is "Ready"
	I1109 14:11:22.085202  228465 pod_ready.go:86] duration metric: took 399.627478ms for pod "kube-scheduler-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:22.085212  228465 pod_ready.go:40] duration metric: took 1.60406283s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:11:22.129110  228465 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:11:22.130653  228465 out.go:179] * Done! kubectl is now configured to use "pause-092489" cluster and "default" namespace by default
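Editor's note: the pod_ready waits above check each control-plane pod in kube-system for the "Ready" condition before declaring the pause-092489 cluster done. A minimal client-go sketch of that check; the kubeconfig path and pod name are placeholders, and minikube's own logic (pod_ready.go) builds its client from the profile certificates instead:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named kube-system pod has its Ready
    // condition set to True, mirroring the pod_ready.go waits above.
    func isPodReady(clientset *kubernetes.Clientset, name string) (bool, error) {
    	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Placeholder kubeconfig path for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := isPodReady(clientset, "etcd-pause-092489")
    	fmt.Println(ready, err)
    }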
	I1109 14:11:19.453020  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:19.474715  228825 cli_runner.go:164] Run: docker exec old-k8s-version-169816 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:11:19.522662  228825 oci.go:144] the created container "old-k8s-version-169816" has a running status.
	I1109 14:11:19.522697  228825 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa...
	I1109 14:11:19.783094  228825 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:11:19.813444  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:19.838420  228825 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:11:19.838442  228825 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-169816 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:11:19.889977  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:19.911392  228825 machine.go:94] provisionDockerMachine start ...
	I1109 14:11:19.911484  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:19.932091  228825 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:19.932454  228825 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:11:19.932479  228825 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:11:20.067382  228825 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169816
	
	I1109 14:11:20.067409  228825 ubuntu.go:182] provisioning hostname "old-k8s-version-169816"
	I1109 14:11:20.067467  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.085885  228825 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:20.086151  228825 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:11:20.086169  228825 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169816 && echo "old-k8s-version-169816" | sudo tee /etc/hostname
	I1109 14:11:20.223416  228825 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169816
	
	I1109 14:11:20.223496  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.240670  228825 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:20.240881  228825 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:11:20.240903  228825 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169816/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:11:20.367782  228825 main.go:143] libmachine: SSH cmd err, output: <nil>: 
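Editor's note: the SSH script above makes the /etc/hosts update idempotent: if any line already ends in the new hostname it does nothing, otherwise it rewrites the existing 127.0.1.1 line or appends one. A small Go sketch of the same decision, operating on the file contents in memory (an illustration of the logic, not minikube's actual code, which shells out as shown):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell above: leave the file alone if the
    // hostname is already mapped, otherwise rewrite the 127.0.1.1 line or
    // append a new entry.
    func ensureHostsEntry(hosts, hostname string) string {
    	lines := strings.Split(hosts, "\n")
    	for _, l := range lines {
    		fields := strings.Fields(l)
    		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
    			return hosts // an entry for this hostname already exists
    		}
    	}
    	for i, l := range lines {
    		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			return strings.Join(lines, "\n")
    		}
    	}
    	return hosts + "\n127.0.1.1 " + hostname
    }

    func main() {
    	fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 oldname", "old-k8s-version-169816"))
    }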
	I1109 14:11:20.367816  228825 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:11:20.367838  228825 ubuntu.go:190] setting up certificates
	I1109 14:11:20.367858  228825 provision.go:84] configureAuth start
	I1109 14:11:20.367911  228825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169816
	I1109 14:11:20.386316  228825 provision.go:143] copyHostCerts
	I1109 14:11:20.386370  228825 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:11:20.386380  228825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:11:20.386446  228825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:11:20.386534  228825 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:11:20.386542  228825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:11:20.386570  228825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:11:20.386627  228825 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:11:20.386649  228825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:11:20.386692  228825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:11:20.386751  228825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169816 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-169816]
	I1109 14:11:20.542962  228825 provision.go:177] copyRemoteCerts
	I1109 14:11:20.543012  228825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:11:20.543053  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.560555  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:20.652227  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:11:20.670546  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:11:20.687030  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:11:20.703111  228825 provision.go:87] duration metric: took 335.237857ms to configureAuth
	I1109 14:11:20.703131  228825 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:11:20.703294  228825 config.go:182] Loaded profile config "old-k8s-version-169816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:11:20.703390  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.721164  228825 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:20.721364  228825 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:11:20.721386  228825 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:11:20.955779  228825 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:11:20.955799  228825 machine.go:97] duration metric: took 1.044378291s to provisionDockerMachine
	I1109 14:11:20.955809  228825 client.go:176] duration metric: took 6.336010633s to LocalClient.Create
	I1109 14:11:20.955825  228825 start.go:167] duration metric: took 6.336066137s to libmachine.API.Create "old-k8s-version-169816"
	I1109 14:11:20.955833  228825 start.go:293] postStartSetup for "old-k8s-version-169816" (driver="docker")
	I1109 14:11:20.955845  228825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:11:20.955910  228825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:11:20.955948  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.973812  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:21.066540  228825 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:11:21.069723  228825 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:11:21.069748  228825 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:11:21.069757  228825 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:11:21.069796  228825 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:11:21.069874  228825 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:11:21.069968  228825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:11:21.077358  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:21.096272  228825 start.go:296] duration metric: took 140.427591ms for postStartSetup
	I1109 14:11:21.096622  228825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169816
	I1109 14:11:21.114617  228825 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/config.json ...
	I1109 14:11:21.114877  228825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:11:21.114919  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:21.131455  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:21.220238  228825 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:11:21.224546  228825 start.go:128] duration metric: took 6.606451463s to createHost
	I1109 14:11:21.224567  228825 start.go:83] releasing machines lock for "old-k8s-version-169816", held for 6.606584094s
	I1109 14:11:21.224633  228825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169816
	I1109 14:11:21.241784  228825 ssh_runner.go:195] Run: cat /version.json
	I1109 14:11:21.241835  228825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:11:21.241849  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:21.241905  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:21.259501  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:21.260089  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:21.348028  228825 ssh_runner.go:195] Run: systemctl --version
	I1109 14:11:21.401085  228825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:11:21.433714  228825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:11:21.438118  228825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:11:21.438169  228825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:11:21.462633  228825 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:11:21.462684  228825 start.go:496] detecting cgroup driver to use...
	I1109 14:11:21.462714  228825 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:11:21.462762  228825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:11:21.477467  228825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:11:21.489210  228825 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:11:21.489267  228825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:11:21.504428  228825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:11:21.521805  228825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:11:21.602291  228825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:11:21.684742  228825 docker.go:234] disabling docker service ...
	I1109 14:11:21.684811  228825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:11:21.703355  228825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:11:21.714710  228825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:11:21.793855  228825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:11:21.877841  228825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:11:21.889592  228825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:11:21.903077  228825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1109 14:11:21.903137  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.912675  228825 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:11:21.912729  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.920886  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.928752  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.936721  228825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:11:21.944903  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.952675  228825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.964888  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.973573  228825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:11:21.980280  228825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:11:21.987179  228825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:22.065034  228825 ssh_runner.go:195] Run: sudo systemctl restart crio
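Editor's note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.9 and switch cgroup_manager to "systemd" before restarting CRI-O. A hedged Go sketch of the same two edits done in memory (illustrative only; minikube shells out to sed as the log shows):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf applies the same substitutions the sed commands above
    // perform: pin pause_image and force the systemd cgroup manager.
    func rewriteCrioConf(conf string) string {
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	return conf
    }

    func main() {
    	in := "pause_image = \"registry.k8s.io/pause:3.10\"\ncgroup_manager = \"cgroupfs\"\n"
    	fmt.Print(rewriteCrioConf(in))
    }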
	I1109 14:11:22.177474  228825 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:11:22.177537  228825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:11:22.181340  228825 start.go:564] Will wait 60s for crictl version
	I1109 14:11:22.181392  228825 ssh_runner.go:195] Run: which crictl
	I1109 14:11:22.185037  228825 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:11:22.211718  228825 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:11:22.211791  228825 ssh_runner.go:195] Run: crio --version
	I1109 14:11:22.243562  228825 ssh_runner.go:195] Run: crio --version
	I1109 14:11:22.280768  228825 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1109 14:11:22.281917  228825 cli_runner.go:164] Run: docker network inspect old-k8s-version-169816 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:11:22.300791  228825 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:11:22.304843  228825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:11:22.315419  228825 kubeadm.go:884] updating cluster {Name:old-k8s-version-169816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:11:22.315591  228825 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:11:22.315677  228825 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:22.347577  228825 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:11:22.347600  228825 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:11:22.347682  228825 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:22.371879  228825 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:11:22.371900  228825 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:11:22.371908  228825 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1109 14:11:22.372003  228825 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-169816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
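Editor's note: the kubelet unit override above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 372-byte scp a few lines further down) before daemon-reload and start. A minimal sketch of writing such a drop-in, assuming root and the paths shown in the log; the ExecStart flags are abbreviated from the unit above:

    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    )

    // writeKubeletDropIn writes the rendered [Service] override to the
    // systemd drop-in directory; a daemon-reload is still needed afterwards,
    // as the log below shows.
    func writeKubeletDropIn(unit string) error {
    	dir := "/etc/systemd/system/kubelet.service.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	return os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(unit), 0o644)
    }

    func main() {
    	unit := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --hostname-override=old-k8s-version-169816 --node-ip=192.168.76.2
    `
    	if err := writeKubeletDropIn(unit); err != nil {
    		log.Fatal(err)
    	}
    }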
	I1109 14:11:22.372083  228825 ssh_runner.go:195] Run: crio config
	I1109 14:11:22.416490  228825 cni.go:84] Creating CNI manager for ""
	I1109 14:11:22.416514  228825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:22.416533  228825 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:11:22.416563  228825 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169816 NodeName:old-k8s-version-169816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:11:22.416754  228825 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-169816"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:11:22.416830  228825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1109 14:11:22.424736  228825 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:11:22.424785  228825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:11:22.432440  228825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1109 14:11:22.444727  228825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:11:22.462611  228825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
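Editor's note: the 2159-byte kubeadm.yaml.new copied above carries the four documents printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that splits such a multi-document file and sanity-checks each document's apiVersion/kind before it is handed to kubeadm; gopkg.in/yaml.v3 here is an assumption for illustration, not what minikube itself uses:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    // checkKubeadmConfig splits a multi-document kubeadm config and prints
    // the apiVersion/kind of each document, failing on anything unparsable.
    func checkKubeadmConfig(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		if strings.TrimSpace(doc) == "" {
    			continue
    		}
    		var meta struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
    			return err
    		}
    		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
    	}
    	return nil
    }

    func main() {
    	if err := checkKubeadmConfig("/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }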
	I1109 14:11:22.476046  228825 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:11:22.479436  228825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:11:22.489064  228825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:22.575301  228825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:11:22.600211  228825 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816 for IP: 192.168.76.2
	I1109 14:11:22.600230  228825 certs.go:195] generating shared ca certs ...
	I1109 14:11:22.600248  228825 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:22.600406  228825 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:11:22.600462  228825 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:11:22.600475  228825 certs.go:257] generating profile certs ...
	I1109 14:11:22.600540  228825 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.key
	I1109 14:11:22.600564  228825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt with IP's: []
	I1109 14:11:23.031287  228825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt ...
	I1109 14:11:23.031317  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: {Name:mkcd9ed6dc69ce6a3d0b73e16bb6024020ba4fb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.031505  228825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.key ...
	I1109 14:11:23.031522  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.key: {Name:mk5c2a6e8cf42bd3a0054b0d8d5450a14bdd8065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.031633  228825 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key.69cbadc6
	I1109 14:11:23.031668  228825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt.69cbadc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1109 14:11:23.378927  228825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt.69cbadc6 ...
	I1109 14:11:23.378952  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt.69cbadc6: {Name:mkf01539571826156d06efee737dcce465207aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.379094  228825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key.69cbadc6 ...
	I1109 14:11:23.379107  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key.69cbadc6: {Name:mk5ebb8e8122a7b613d07eb43f310370bb8be779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.379181  228825 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt.69cbadc6 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt
	I1109 14:11:23.379250  228825 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key.69cbadc6 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key
	I1109 14:11:23.379302  228825 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.key
	I1109 14:11:23.379316  228825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.crt with IP's: []
	I1109 14:11:23.411058  228825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.crt ...
	I1109 14:11:23.411077  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.crt: {Name:mkbb13b175ee428d211c3094d183405bf8266158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.411195  228825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.key ...
	I1109 14:11:23.411207  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.key: {Name:mkc7ce65443711fdf9dfcd4d8a8a1af4c8a0c611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
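Editor's note: the three generation steps above produce the profile's client, apiserver, and proxy-client certificates, the apiserver one carrying the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. A compact Go sketch of generating a certificate with those SANs; it is self-signed for brevity, whereas the real profile certs are signed by the shared minikubeCA key:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    		},
    	}
    	// Self-signed: the template acts as its own parent for illustration.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }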
	I1109 14:11:23.411366  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:11:23.411397  228825 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:11:23.411406  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:11:23.411425  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:11:23.411449  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:11:23.411488  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:11:23.411552  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:23.412076  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:11:23.429774  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:11:23.446594  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:11:23.463608  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:11:23.480337  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1109 14:11:23.497892  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:11:23.515270  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:11:23.531595  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:11:23.547735  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:11:23.565239  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:11:23.580946  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:11:23.597501  228825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:11:23.609487  228825 ssh_runner.go:195] Run: openssl version
	I1109 14:11:23.615587  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:11:23.623410  228825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:11:23.627912  228825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:11:23.627972  228825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:11:23.665375  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:11:23.674009  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:11:23.682040  228825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:23.685589  228825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:23.685649  228825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:23.719997  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:11:23.728856  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:11:23.737617  228825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:11:23.741666  228825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:11:23.741707  228825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:11:23.778705  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
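Editor's note: the three blocks above install each PEM under /usr/share/ca-certificates and symlink it in /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch that computes the hash the same way, by shelling out to openssl as the log does; the certificate path is one of those shown above:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of a PEM file and
    // symlinks it as /etc/ssl/certs/<hash>.0, mirroring the commands above.
    func linkCertByHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	if _, err := os.Lstat(link); err == nil {
    		return nil // link already present
    	}
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }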
	I1109 14:11:23.786902  228825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:11:23.790285  228825 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:11:23.790332  228825 kubeadm.go:401] StartCluster: {Name:old-k8s-version-169816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:11:23.790411  228825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:11:23.790471  228825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:11:23.816796  228825 cri.go:89] found id: ""
	I1109 14:11:23.816853  228825 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:11:23.824246  228825 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:11:23.831729  228825 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:11:23.831768  228825 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:11:23.838867  228825 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:11:23.838883  228825 kubeadm.go:158] found existing configuration files:
	
	I1109 14:11:23.838921  228825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:11:23.846448  228825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:11:23.846496  228825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:11:23.853883  228825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:11:23.860835  228825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:11:23.860868  228825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:11:23.867683  228825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:11:23.874822  228825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:11:23.874864  228825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:11:23.881489  228825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:11:23.888427  228825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:11:23.888463  228825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:11:23.895121  228825 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:11:23.936477  228825 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1109 14:11:23.936554  228825 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:11:23.970427  228825 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:11:23.970521  228825 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:11:23.970567  228825 kubeadm.go:319] OS: Linux
	I1109 14:11:23.970656  228825 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:11:23.970716  228825 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:11:23.970783  228825 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:11:23.970853  228825 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:11:23.970922  228825 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:11:23.971016  228825 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:11:23.971107  228825 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:11:23.971175  228825 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:11:24.037661  228825 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:11:24.037853  228825 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:11:24.038006  228825 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 14:11:24.170206  228825 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:11:19.336004  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:19.336371  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:19.336433  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:19.336490  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:19.365900  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:19.365920  188127 cri.go:89] found id: ""
	I1109 14:11:19.365941  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:19.366005  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:19.369924  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:19.369999  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:19.399890  188127 cri.go:89] found id: ""
	I1109 14:11:19.399959  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.399977  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:19.399985  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:19.400041  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:19.429025  188127 cri.go:89] found id: ""
	I1109 14:11:19.429053  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.429064  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:19.429072  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:19.429127  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:19.463744  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:19.463767  188127 cri.go:89] found id: ""
	I1109 14:11:19.463777  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:19.463831  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:19.468618  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:19.468707  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:19.500006  188127 cri.go:89] found id: ""
	I1109 14:11:19.500034  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.500047  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:19.500055  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:19.500122  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:19.529521  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:19.529567  188127 cri.go:89] found id: ""
	I1109 14:11:19.529578  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:19.529659  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:19.533777  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:19.533890  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:19.566677  188127 cri.go:89] found id: ""
	I1109 14:11:19.566703  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.566712  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:19.566719  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:19.566771  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:19.599269  188127 cri.go:89] found id: ""
	I1109 14:11:19.599294  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.599304  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:19.599315  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:19.599334  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:19.630359  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:19.630391  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:19.751209  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:19.751243  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:19.768952  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:19.768979  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:19.846808  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:19.846830  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:19.846847  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:19.887373  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:19.887407  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:19.947158  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:19.947184  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:19.977403  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:19.977428  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
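The cycle above repeats while minikube waits for the apiserver at 192.168.85.2:8443 to come back: the healthz probe is refused, each control-plane component is looked up with crictl, and the kubelet, dmesg, describe-nodes, per-container and CRI-O logs are collected in turn. The same probes can be run by hand from inside that node; a minimal sketch, assuming the node is still reachable over minikube ssh and keeps the 192.168.85.2 address shown above (the curl probe is an illustration; the remaining commands appear verbatim in the collection log):

	curl -k https://192.168.85.2:8443/healthz        # the probe that is being refused
	sudo crictl ps -a --quiet --name=kube-apiserver  # list the apiserver container, if any
	sudo journalctl -u kubelet -n 400                # kubelet log tail, as gathered above
	sudo journalctl -u crio -n 400                   # CRI-O log tail, as gathered above
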
	I1109 14:11:22.531728  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:22.532135  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:22.532195  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:22.532252  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:22.559400  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:22.559420  188127 cri.go:89] found id: ""
	I1109 14:11:22.559429  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:22.559483  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:22.563178  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:22.563235  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:22.592509  188127 cri.go:89] found id: ""
	I1109 14:11:22.592533  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.592543  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:22.592550  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:22.592595  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:22.622102  188127 cri.go:89] found id: ""
	I1109 14:11:22.622132  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.622142  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:22.622149  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:22.622203  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:22.656782  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:22.656812  188127 cri.go:89] found id: ""
	I1109 14:11:22.656820  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:22.656872  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:22.660757  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:22.660809  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:22.687636  188127 cri.go:89] found id: ""
	I1109 14:11:22.687683  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.687693  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:22.687700  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:22.687756  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:22.712052  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:22.712072  188127 cri.go:89] found id: ""
	I1109 14:11:22.712082  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:22.712130  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:22.715751  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:22.715817  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:22.742570  188127 cri.go:89] found id: ""
	I1109 14:11:22.742590  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.742598  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:22.742604  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:22.742668  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:22.767247  188127 cri.go:89] found id: ""
	I1109 14:11:22.767272  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.767281  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:22.767291  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:22.767304  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:22.791283  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:22.791309  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:22.850917  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:22.850940  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:22.886988  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:22.887015  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:22.982416  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:22.982445  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:22.998366  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:22.998392  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:23.066171  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:23.066189  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:23.066202  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:23.097123  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:23.097150  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:24.173112  228825 out.go:252]   - Generating certificates and keys ...
	I1109 14:11:24.173183  228825 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:11:24.173278  228825 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	
	
	==> CRI-O <==
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.038698392Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.039560664Z" level=info msg="Conmon does support the --sync option"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.03957666Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.03959297Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.040527321Z" level=info msg="Conmon does support the --sync option"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.040543103Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.044546951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.044573415Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.045278821Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.04565578Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.04571246Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.051086191Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.094374872Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-z82qd Namespace:kube-system ID:93c0a608a4ae12c35a0136500527c1034979b4c1cbfe35c62df719a055f3d559 UID:0bab7054-1d49-4279-9e6f-62c7dd91785d NetNS:/var/run/netns/d74dafa6-4d44-42ae-aaae-19938ef0f444 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00088a0e8}] Aliases:map[]}"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.094677998Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-z82qd for CNI network kindnet (type=ptp)"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095212033Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095237716Z" level=info msg="Starting seccomp notifier watcher"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095478801Z" level=info msg="Create NRI interface"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095609323Z" level=info msg="built-in NRI default validator is disabled"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095620118Z" level=info msg="runtime interface created"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.09563396Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095668686Z" level=info msg="runtime interface starting up..."
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095676696Z" level=info msg="starting plugins..."
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.09569225Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.096070384Z" level=info msg="No systemd watchdog enabled"
	Nov 09 14:11:19 pause-092489 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2b5b10f4f3f84       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   93c0a608a4ae1       coredns-66bc5c9577-z82qd               kube-system
	f7663e0568a65       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   a0069e0c26e35       kindnet-h2j52                          kube-system
	e255085db448c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   e0054db1a3e8e       kube-proxy-j62h5                       kube-system
	d330e1ae80e3a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   b37fe04782f20       kube-controller-manager-pause-092489   kube-system
	8d4c0bf15d6f7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   8645ee183cd07       etcd-pause-092489                      kube-system
	8d495cb1f952d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   181a7f639cb69       kube-scheduler-pause-092489            kube-system
	e21ada7ed93b3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   c8dc402bb598d       kube-apiserver-pause-092489            kube-system
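The table above is the node-side crictl view of pause-092489: all seven containers (control plane, kube-proxy, kindnet, coredns) are Running with attempt 0, so nothing has restarted up to this point. The same listing can be re-checked through the driver, assuming the pause-092489 profile is still present:

	minikube -p pause-092489 ssh -- sudo crictl ps -a
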
	
	
	==> coredns [2b5b10f4f3f84bb026e0851a309a45699c18e38c6152d977136df4d3d7ce824f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37501 - 34408 "HINFO IN 6770464144348760453.4471584213420131731. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.86208948s
	
	
	==> describe nodes <==
	Name:               pause-092489
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-092489
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=pause-092489
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_10_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:10:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-092489
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:11:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:11:10 +0000   Sun, 09 Nov 2025 14:10:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:11:10 +0000   Sun, 09 Nov 2025 14:10:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:11:10 +0000   Sun, 09 Nov 2025 14:10:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:11:10 +0000   Sun, 09 Nov 2025 14:11:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-092489
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                8ee20d3d-21db-4a7a-b9a3-995feff3a0bf
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z82qd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-092489                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-h2j52                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-092489             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-092489    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-j62h5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-092489             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  36s (x8 over 37s)  kubelet          Node pause-092489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 37s)  kubelet          Node pause-092489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 37s)  kubelet          Node pause-092489 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node pause-092489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node pause-092489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node pause-092489 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node pause-092489 event: Registered Node pause-092489 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-092489 status is now: NodeReady
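The node description above is produced by the kubectl binary shipped on the node, run against the node-local kubeconfig rather than the host's; it is the same invocation that fails with "connection refused" in the interleaved 188127 capture, where that other profile's apiserver was still down. Run by hand over minikube ssh it is:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
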
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [8d4c0bf15d6f73166eefeb2a225847ad6656500266e6c4258eabfbaab89d2c19] <==
	{"level":"warn","ts":"2025-11-09T14:10:54.900835Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.399031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T14:10:54.900900Z","caller":"traceutil/trace.go:172","msg":"trace[1240050507] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:0; response_revision:294; }","duration":"255.477224ms","start":"2025-11-09T14:10:54.645404Z","end":"2025-11-09T14:10:54.900881Z","steps":["trace[1240050507] 'agreement among raft nodes before linearized reading'  (duration: 127.607031ms)","trace[1240050507] 'range keys from in-memory index tree'  (duration: 127.75586ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:54.901270Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.948065ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789902661887140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/kubeadm:node-proxier\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kubeadm:node-proxier\" value_size:362 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:10:54.901352Z","caller":"traceutil/trace.go:172","msg":"trace[1112341552] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"256.425224ms","start":"2025-11-09T14:10:54.644906Z","end":"2025-11-09T14:10:54.901331Z","steps":["trace[1112341552] 'process raft request'  (duration: 128.041274ms)","trace[1112341552] 'compare'  (duration: 127.848172ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:54.901452Z","caller":"traceutil/trace.go:172","msg":"trace[1794827297] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"187.165732ms","start":"2025-11-09T14:10:54.714227Z","end":"2025-11-09T14:10:54.901393Z","steps":["trace[1794827297] 'process raft request'  (duration: 187.107327ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.031234Z","caller":"traceutil/trace.go:172","msg":"trace[1177562587] linearizableReadLoop","detail":"{readStateIndex:303; appliedIndex:303; }","duration":"128.20273ms","start":"2025-11-09T14:10:54.903011Z","end":"2025-11-09T14:10:55.031214Z","steps":["trace[1177562587] 'read index received'  (duration: 128.190858ms)","trace[1177562587] 'applied index is now lower than readState.Index'  (duration: 10.229µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:55.086019Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.981051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-09T14:10:55.086060Z","caller":"traceutil/trace.go:172","msg":"trace[181611840] transaction","detail":"{read_only:false; number_of_response:0; response_revision:296; }","duration":"238.706431ms","start":"2025-11-09T14:10:54.847350Z","end":"2025-11-09T14:10:55.086056Z","steps":["trace[181611840] 'process raft request'  (duration: 238.623412ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.086085Z","caller":"traceutil/trace.go:172","msg":"trace[1175370108] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:296; }","duration":"183.063266ms","start":"2025-11-09T14:10:54.903006Z","end":"2025-11-09T14:10:55.086069Z","steps":["trace[1175370108] 'agreement among raft nodes before linearized reading'  (duration: 128.285332ms)","trace[1175370108] 'range keys from in-memory index tree'  (duration: 54.594653ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:55.086075Z","caller":"traceutil/trace.go:172","msg":"trace[941473842] transaction","detail":"{read_only:false; number_of_response:0; response_revision:296; }","duration":"238.785858ms","start":"2025-11-09T14:10:54.847261Z","end":"2025-11-09T14:10:55.086047Z","steps":["trace[941473842] 'process raft request'  (duration: 184.038655ms)","trace[941473842] 'compare'  (duration: 54.625983ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:55.086027Z","caller":"traceutil/trace.go:172","msg":"trace[311800633] transaction","detail":"{read_only:false; number_of_response:0; response_revision:296; }","duration":"238.652712ms","start":"2025-11-09T14:10:54.847363Z","end":"2025-11-09T14:10:55.086016Z","steps":["trace[311800633] 'process raft request'  (duration: 238.634412ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.151281Z","caller":"traceutil/trace.go:172","msg":"trace[1761511854] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"236.179117ms","start":"2025-11-09T14:10:54.915092Z","end":"2025-11-09T14:10:55.151271Z","steps":["trace[1761511854] 'process raft request'  (duration: 236.13816ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.151314Z","caller":"traceutil/trace.go:172","msg":"trace[1898559928] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"247.363614ms","start":"2025-11-09T14:10:54.903933Z","end":"2025-11-09T14:10:55.151297Z","steps":["trace[1898559928] 'process raft request'  (duration: 247.219368ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.383587Z","caller":"traceutil/trace.go:172","msg":"trace[716513403] linearizableReadLoop","detail":"{readStateIndex:311; appliedIndex:311; }","duration":"124.841082ms","start":"2025-11-09T14:10:55.258728Z","end":"2025-11-09T14:10:55.383569Z","steps":["trace[716513403] 'read index received'  (duration: 124.835682ms)","trace[716513403] 'applied index is now lower than readState.Index'  (duration: 4.699µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:55.515748Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.997924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" limit:1 ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2025-11-09T14:10:55.515803Z","caller":"traceutil/trace.go:172","msg":"trace[1444622275] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:301; }","duration":"257.070167ms","start":"2025-11-09T14:10:55.258719Z","end":"2025-11-09T14:10:55.515789Z","steps":["trace[1444622275] 'agreement among raft nodes before linearized reading'  (duration: 124.942372ms)","trace[1444622275] 'range keys from in-memory index tree'  (duration: 132.012742ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:55.515998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.134417ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789902661887158 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-092489\" mod_revision:276 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-092489\" value_size:7412 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-092489\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:10:55.516068Z","caller":"traceutil/trace.go:172","msg":"trace[636778214] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"259.194564ms","start":"2025-11-09T14:10:55.256861Z","end":"2025-11-09T14:10:55.516056Z","steps":["trace[636778214] 'process raft request'  (duration: 126.802933ms)","trace[636778214] 'compare'  (duration: 132.051689ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:55.655201Z","caller":"traceutil/trace.go:172","msg":"trace[1473739482] linearizableReadLoop","detail":"{readStateIndex:313; appliedIndex:313; }","duration":"117.033393ms","start":"2025-11-09T14:10:55.538150Z","end":"2025-11-09T14:10:55.655184Z","steps":["trace[1473739482] 'read index received'  (duration: 117.028118ms)","trace[1473739482] 'applied index is now lower than readState.Index'  (duration: 4.405µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:55.655312Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.141788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T14:10:55.655344Z","caller":"traceutil/trace.go:172","msg":"trace[1773379853] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:0; response_revision:303; }","duration":"117.19192ms","start":"2025-11-09T14:10:55.538143Z","end":"2025-11-09T14:10:55.655335Z","steps":["trace[1773379853] 'agreement among raft nodes before linearized reading'  (duration: 117.103442ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.655395Z","caller":"traceutil/trace.go:172","msg":"trace[1620174108] transaction","detail":"{read_only:false; response_revision:304; number_of_response:1; }","duration":"132.585952ms","start":"2025-11-09T14:10:55.522797Z","end":"2025-11-09T14:10:55.655383Z","steps":["trace[1620174108] 'process raft request'  (duration: 132.449248ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:10:55.936924Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.597114ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789902661887171 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/disruption-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/disruption-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:10:55.937017Z","caller":"traceutil/trace.go:172","msg":"trace[544720244] transaction","detail":"{read_only:false; response_revision:305; number_of_response:1; }","duration":"277.072121ms","start":"2025-11-09T14:10:55.659929Z","end":"2025-11-09T14:10:55.937002Z","steps":["trace[544720244] 'process raft request'  (duration: 129.319683ms)","trace[544720244] 'compare'  (duration: 147.429635ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:55.937528Z","caller":"traceutil/trace.go:172","msg":"trace[1630939464] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"275.298122ms","start":"2025-11-09T14:10:55.662215Z","end":"2025-11-09T14:10:55.937513Z","steps":["trace[1630939464] 'process raft request'  (duration: 275.12276ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:11:25 up 53 min,  0 user,  load average: 4.15, 2.94, 1.74
	Linux pause-092489 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f7663e0568a65ffaaf3c55965b13b778d4f8efa35939c24622c19d8f69d183fa] <==
	I1109 14:11:00.067060       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:11:00.115122       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1109 14:11:00.115282       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:11:00.115306       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:11:00.115330       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:11:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:11:00.415066       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:11:00.415387       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:11:00.415456       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:11:00.415702       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:11:00.816556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:11:00.816591       1 metrics.go:72] Registering metrics
	I1109 14:11:00.816659       1 controller.go:711] "Syncing nftables rules"
	I1109 14:11:10.317730       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:11:10.317799       1 main.go:301] handling current node
	I1109 14:11:20.324721       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:11:20.324760       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e21ada7ed93b3e0e01d2725059c4155c569e0c4090e0e3079f02bc81d5274700] <==
	I1109 14:10:51.445844       1 policy_source.go:240] refreshing policies
	E1109 14:10:51.453617       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1109 14:10:51.500993       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:10:51.523289       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:10:51.523486       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:10:51.529570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:10:51.530169       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:10:51.615789       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:10:52.303599       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:10:52.307483       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:10:52.307501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:10:52.753066       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:10:52.788470       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:10:52.909413       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:10:52.915278       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1109 14:10:52.916275       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:10:52.920781       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:10:53.326493       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:10:54.175261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:10:54.331429       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:10:54.387904       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:10:58.428845       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:10:58.432148       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:10:58.777758       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1109 14:10:59.428781       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d330e1ae80e3a30bfdc4b113f673bf973656704d0579d016dd61b77458e2c396] <==
	I1109 14:10:58.317698       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:10:58.317712       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:10:58.317719       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:10:58.325892       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:10:58.325916       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1109 14:10:58.325923       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:10:58.325942       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:10:58.325971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1109 14:10:58.326021       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:10:58.326113       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:10:58.327120       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:10:58.327146       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:10:58.329295       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:10:58.330467       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:10:58.330485       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:10:58.330530       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:10:58.330586       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:10:58.330598       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:10:58.330604       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:10:58.331681       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:10:58.336386       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-092489" podCIDRs=["10.244.0.0/24"]
	I1109 14:10:58.338474       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:10:58.342677       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:10:58.350032       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:11:13.277997       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e255085db448c22caf3ee1ab534518079a741907de0468fc851c20ed70ff553a] <==
	I1109 14:10:59.889455       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:10:59.964869       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:11:00.067517       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:11:00.067557       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1109 14:11:00.067683       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:11:00.085625       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:11:00.085701       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:11:00.090485       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:11:00.090834       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:11:00.090861       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:11:00.092191       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:11:00.092215       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:11:00.092228       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:11:00.092239       1 config.go:200] "Starting service config controller"
	I1109 14:11:00.092251       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:11:00.092257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:11:00.092345       1 config.go:309] "Starting node config controller"
	I1109 14:11:00.092353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:11:00.092360       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:11:00.192338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:11:00.192428       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:11:00.192528       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8d495cb1f952ddb415c3231fdd998bdecdb92da5f5d74d99731326c32330b72d] <==
	E1109 14:10:51.377047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:10:51.377223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:10:51.377372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:10:51.377401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:10:51.377606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:10:51.377670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:10:51.377662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:10:51.377691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:10:51.377751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:10:51.377805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:10:51.377831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:10:51.377954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:10:51.377932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:10:51.378063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:10:51.378078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:10:51.378127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:10:52.207947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:10:52.215047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:10:52.325746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 14:10:52.388877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:10:52.434228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:10:52.556811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:10:52.559834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:10:52.587196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1109 14:10:55.475177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:11:16 pause-092489 kubelet[1285]: E1109 14:11:16.889366    1285 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 09 14:11:16 pause-092489 kubelet[1285]: E1109 14:11:16.889428    1285 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:16 pause-092489 kubelet[1285]: E1109 14:11:16.889445    1285 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:16 pause-092489 kubelet[1285]: W1109 14:11:16.989772    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: W1109 14:11:17.172775    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: W1109 14:11:17.450302    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: W1109 14:11:17.802466    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.834919    1285 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.835012    1285 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.835028    1285 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.835039    1285 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.890528    1285 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.890585    1285 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.890597    1285 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:18 pause-092489 kubelet[1285]: W1109 14:11:18.332609    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.844787    1285 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.844839    1285 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.891422    1285 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.891476    1285 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.891488    1285 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:22 pause-092489 kubelet[1285]: I1109 14:11:22.543185    1285 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 09 14:11:22 pause-092489 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:11:22 pause-092489 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:11:22 pause-092489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:11:22 pause-092489 systemd[1]: kubelet.service: Consumed 1.131s CPU time.
	

                                                
                                                
-- /stdout --
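
The dump above shows two separate symptoms: the kube-scheduler "Failed to watch ... forbidden" lines, which stop once its informer caches sync at 14:10:55, and the kubelet repeatedly failing to dial /var/run/crio/crio.sock with "no such file or directory" until systemd stops kubelet.service at 14:11:22. A minimal sketch of the same probe the kubelet's gRPC dial is attempting, useful for confirming whether CRI-O is actually serving on that socket; it assumes it is run inside the node (for example via `minikube ssh`) and that the socket path matches the log.

    // crio_sock_probe.go - sketch: check that the CRI-O socket exists and
    // accepts connections, mirroring the dial that keeps failing above.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/crio/crio.sock" // path taken from the kubelet log

    	// "connect: no such file or directory" in the log simply means the
    	// socket file is absent while CRI-O is down or restarting.
    	if _, err := os.Stat(sock); err != nil {
    		fmt.Fprintf(os.Stderr, "socket missing: %v\n", err)
    		os.Exit(1)
    	}

    	// If the file exists, verify something is actually listening on it.
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "socket present but not accepting connections: %v\n", err)
    		os.Exit(1)
    	}
    	defer conn.Close()
    	fmt.Println("CRI-O socket is present and accepting connections")
    }

On this run the kubelet would keep hitting the first branch right up to the point where it is shut down.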
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-092489 -n pause-092489
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-092489 -n pause-092489: exit status 2 (333.951439ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-092489 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
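
The kubectl query the helper runs above lists every pod whose status.phase is not Running, across all namespaces; an empty result narrows the failure to the pause/unpause plumbing rather than workload pods. A rough Go equivalent of that one-liner, shelling out to kubectl the same way the harness does (the context name is copied from the log; everything else is an assumption):

    // not_running_pods.go - sketch of the post-mortem query: list pods that
    // are not in phase Running in every namespace of the given context.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	ctxName := "pause-092489" // kube context from the log above

    	cmd := exec.Command("kubectl",
    		"--context", ctxName,
    		"get", "po", "-A",
    		"--field-selector=status.phase!=Running",
    		"-o", "jsonpath={.items[*].metadata.name}")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "kubectl failed: %v\n%s", err, out)
    		os.Exit(1)
    	}
    	if len(out) == 0 {
    		fmt.Println("no non-Running pods")
    		return
    	}
    	fmt.Printf("non-Running pods: %s\n", out)
    }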
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-092489
helpers_test.go:243: (dbg) docker inspect pause-092489:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447",
	        "Created": "2025-11-09T14:10:40.253996241Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220122,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:10:40.291470556Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447/hostname",
	        "HostsPath": "/var/lib/docker/containers/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447/hosts",
	        "LogPath": "/var/lib/docker/containers/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447/3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447-json.log",
	        "Name": "/pause-092489",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-092489:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-092489",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3283112c9e94706d89c0a48afd5afd06921d7724dd08a41e55a2aa8d7358b447",
	                "LowerDir": "/var/lib/docker/overlay2/9650de60d0693888c238c1d66c538850cf40b26a7e9ad7d964d55dabc981a0b8-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9650de60d0693888c238c1d66c538850cf40b26a7e9ad7d964d55dabc981a0b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9650de60d0693888c238c1d66c538850cf40b26a7e9ad7d964d55dabc981a0b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9650de60d0693888c238c1d66c538850cf40b26a7e9ad7d964d55dabc981a0b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-092489",
	                "Source": "/var/lib/docker/volumes/pause-092489/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-092489",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-092489",
	                "name.minikube.sigs.k8s.io": "pause-092489",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2e01f7eb320ca5aae66a6badb04816610aef4f10dbf4fba8b14c54954bc7923a",
	            "SandboxKey": "/var/run/docker/netns/2e01f7eb320c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-092489": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:60:7f:62:be:4d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8ec64412f6290172992317511185500a29434ee189e01028126e0e8cf658a217",
	                    "EndpointID": "74a56f070bedbe95ea1f9ce80cbc203108d2f435037a9846d4d95e1f88f10e88",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-092489",
	                        "3283112c9e94"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
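
Two details are worth pulling out of the inspect dump: the container's own State (Status "running", Paused false, consistent with `minikube pause` acting on the Kubernetes processes inside the node rather than on the Docker container itself) and the loopback port mappings (22/tcp on 33045 for SSH, 8443/tcp on 33048 for the apiserver). A small sketch that extracts both with the same Go-template mechanism the harness uses later for the 22/tcp port; it assumes the docker CLI is on PATH and uses the container name from the log.

    // inspect_fields.go - sketch: read container state and the published host
    // port for the apiserver (8443/tcp) via `docker container inspect -f`.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func inspect(name, format string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	const container = "pause-092489"

    	state, err := inspect(container, "{{.State.Status}} paused={{.State.Paused}}")
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "inspect state: %v\n", err)
    		os.Exit(1)
    	}

    	apiPort, err := inspect(container, `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "inspect port: %v\n", err)
    		os.Exit(1)
    	}

    	fmt.Printf("state: %s, apiserver published on 127.0.0.1:%s\n", state, apiPort)
    }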
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-092489 -n pause-092489
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-092489 -n pause-092489: exit status 2 (316.578981ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
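
Here `status --format={{.Host}}` prints "Running" yet the command still exits 2, which the helper tolerates ("may be ok"), presumably because the exit code reflects overall cluster state rather than the single field being printed. A hedged sketch of how a caller can separate the printed value from the exit code; the binary path, profile and node name are copied from the log, and no meaning is assumed for code 2 itself.

    // status_exit.go - sketch: run the same status command as the helper and
    // report the printed field together with the process exit code.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64",
    		"status", "--format={{.Host}}", "-p", "pause-092489", "-n", "pause-092489")
    	out, err := cmd.Output() // stdout is captured even on a non-zero exit

    	code := 0
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		code = exitErr.ExitCode() // e.g. 2 on this run, despite "Running"
    	} else if err != nil {
    		fmt.Println("could not run minikube:", err)
    		return
    	}

    	fmt.Printf("host=%q exit=%d\n", strings.TrimSpace(string(out)), code)
    }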
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-092489 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-593530 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo docker system info                                                                                                                                                                                                      │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo containerd config dump                                                                                                                                                                                                  │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo crio config                                                                                                                                                                                                             │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ delete  │ -p cilium-593530                                                                                                                                                                                                                              │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │ 09 Nov 25 14:10 UTC │
	│ start   │ -p cert-options-350702 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ cert-options-350702 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ -p cert-options-350702 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ delete  │ -p cert-options-350702                                                                                                                                                                                                                        │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	│ pause   │ -p pause-092489 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:11:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:11:14.433122  228825 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:11:14.433365  228825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:11:14.433373  228825 out.go:374] Setting ErrFile to fd 2...
	I1109 14:11:14.433378  228825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:11:14.433541  228825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:11:14.434028  228825 out.go:368] Setting JSON to false
	I1109 14:11:14.435048  228825 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3224,"bootTime":1762694250,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:11:14.435124  228825 start.go:143] virtualization: kvm guest
	I1109 14:11:14.436963  228825 out.go:179] * [old-k8s-version-169816] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:11:14.438112  228825 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:11:14.438112  228825 notify.go:221] Checking for updates...
	I1109 14:11:14.439240  228825 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:11:14.440338  228825 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:11:14.441779  228825 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:11:14.442807  228825 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:11:14.443981  228825 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:11:14.445611  228825 config.go:182] Loaded profile config "cert-expiration-883873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:14.445766  228825 config.go:182] Loaded profile config "kubernetes-upgrade-755159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:14.445957  228825 config.go:182] Loaded profile config "pause-092489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:14.446063  228825 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:11:14.469634  228825 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:11:14.469779  228825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:11:14.528499  228825 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:11:14.51874498 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:11:14.528637  228825 docker.go:319] overlay module found
	I1109 14:11:14.530001  228825 out.go:179] * Using the docker driver based on user configuration
	I1109 14:11:14.530944  228825 start.go:309] selected driver: docker
	I1109 14:11:14.530960  228825 start.go:930] validating driver "docker" against <nil>
	I1109 14:11:14.530979  228825 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:11:14.531522  228825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:11:14.589269  228825 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:11:14.578584959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:11:14.589455  228825 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:11:14.589679  228825 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:11:14.591083  228825 out.go:179] * Using Docker driver with root privileges
	I1109 14:11:14.592060  228825 cni.go:84] Creating CNI manager for ""
	I1109 14:11:14.592125  228825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:14.592138  228825 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:11:14.592193  228825 start.go:353] cluster config:
	{Name:old-k8s-version-169816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:11:14.593288  228825 out.go:179] * Starting "old-k8s-version-169816" primary control-plane node in "old-k8s-version-169816" cluster
	I1109 14:11:14.594161  228825 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:11:14.595160  228825 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:11:14.596239  228825 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:11:14.596276  228825 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1109 14:11:14.596290  228825 cache.go:65] Caching tarball of preloaded images
	I1109 14:11:14.596332  228825 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:11:14.596400  228825 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:11:14.596416  228825 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1109 14:11:14.596533  228825 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/config.json ...
	I1109 14:11:14.596565  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/config.json: {Name:mk13069d07b835bb3fb802a66fbc1e8d8b175551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:14.617803  228825 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:11:14.617826  228825 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:11:14.617844  228825 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:11:14.617874  228825 start.go:360] acquireMachinesLock for old-k8s-version-169816: {Name:mkedf065ffc7d3ee8fd51a7c60a11da8a2f72508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:14.617971  228825 start.go:364] duration metric: took 79.183µs to acquireMachinesLock for "old-k8s-version-169816"
	I1109 14:11:14.617995  228825 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-169816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:11:14.618080  228825 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:11:13.982712  228465 out.go:252] * Updating the running docker "pause-092489" container ...
	I1109 14:11:13.982754  228465 machine.go:94] provisionDockerMachine start ...
	I1109 14:11:13.982848  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:13.999841  228465 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:14.000101  228465 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:11:14.000114  228465 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:11:14.124883  228465 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092489
	
	I1109 14:11:14.124917  228465 ubuntu.go:182] provisioning hostname "pause-092489"
	I1109 14:11:14.124976  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:14.142898  228465 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:14.143161  228465 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:11:14.143176  228465 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-092489 && echo "pause-092489" | sudo tee /etc/hostname
	I1109 14:11:14.281008  228465 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092489
	
	I1109 14:11:14.281073  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:14.299970  228465 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:14.300193  228465 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:11:14.300216  228465 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-092489' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-092489/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-092489' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:11:14.427271  228465 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:11:14.427300  228465 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:11:14.427317  228465 ubuntu.go:190] setting up certificates
	I1109 14:11:14.427334  228465 provision.go:84] configureAuth start
	I1109 14:11:14.427378  228465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-092489
	I1109 14:11:14.447104  228465 provision.go:143] copyHostCerts
	I1109 14:11:14.447166  228465 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:11:14.447185  228465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:11:14.447272  228465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:11:14.447423  228465 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:11:14.447444  228465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:11:14.447486  228465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:11:14.447587  228465 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:11:14.447598  228465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:11:14.447634  228465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:11:14.447723  228465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.pause-092489 san=[127.0.0.1 192.168.103.2 localhost minikube pause-092489]
	I1109 14:11:14.543595  228465 provision.go:177] copyRemoteCerts
	I1109 14:11:14.543679  228465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:11:14.543722  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:14.566434  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:14.664136  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:11:14.682161  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1109 14:11:14.701792  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:11:14.719557  228465 provision.go:87] duration metric: took 292.210982ms to configureAuth
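
configureAuth generated server.pem with the SANs listed in `san=[...]` above. A short sketch (hypothetical, assumed local path to the generated cert) of checking those SANs with crypto/x509:

```go
// Print the DNS and IP SANs of a PEM certificate; they should match the
// san=[127.0.0.1 192.168.103.2 localhost minikube pause-092489] list logged above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("server.pem") // assumed path to the generated server cert
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs :", cert.IPAddresses)
}
```
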
	I1109 14:11:14.719585  228465 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:11:14.719772  228465 config.go:182] Loaded profile config "pause-092489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:14.719852  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:14.738348  228465 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:14.738630  228465 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:11:14.738698  228465 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:11:15.052519  228465 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:11:15.052545  228465 machine.go:97] duration metric: took 1.069769509s to provisionDockerMachine
	I1109 14:11:15.052559  228465 start.go:293] postStartSetup for "pause-092489" (driver="docker")
	I1109 14:11:15.052571  228465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:11:15.052663  228465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:11:15.052713  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:15.073932  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:15.180117  228465 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:11:15.183715  228465 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:11:15.183745  228465 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:11:15.183756  228465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:11:15.183804  228465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:11:15.183873  228465 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:11:15.183964  228465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:11:15.192797  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:15.209810  228465 start.go:296] duration metric: took 157.237895ms for postStartSetup
	I1109 14:11:15.209880  228465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:11:15.209925  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:15.231110  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:15.334127  228465 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:11:15.339126  228465 fix.go:56] duration metric: took 1.377151789s for fixHost
	I1109 14:11:15.339154  228465 start.go:83] releasing machines lock for "pause-092489", held for 1.377206222s
	I1109 14:11:15.339230  228465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-092489
	I1109 14:11:15.356999  228465 ssh_runner.go:195] Run: cat /version.json
	I1109 14:11:15.357051  228465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:11:15.357057  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:15.357105  228465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-092489
	I1109 14:11:15.376528  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:15.376866  228465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/pause-092489/id_rsa Username:docker}
	I1109 14:11:15.548199  228465 ssh_runner.go:195] Run: systemctl --version
	I1109 14:11:15.554701  228465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:11:15.588394  228465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:11:15.593081  228465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:11:15.593136  228465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:11:15.601406  228465 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:11:15.601432  228465 start.go:496] detecting cgroup driver to use...
	I1109 14:11:15.601464  228465 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:11:15.601515  228465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:11:15.615546  228465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:11:15.628182  228465 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:11:15.628250  228465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:11:15.643522  228465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:11:15.655578  228465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:11:15.765851  228465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:11:15.878148  228465 docker.go:234] disabling docker service ...
	I1109 14:11:15.878203  228465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:11:15.893401  228465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:11:15.906430  228465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:11:16.015567  228465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:11:16.131691  228465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:11:16.144365  228465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:11:16.158466  228465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:11:16.158512  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.167488  228465 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:11:16.167549  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.176841  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.185834  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.194323  228465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:11:16.203017  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.212414  228465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.220619  228465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:16.229483  228465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:11:16.237238  228465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:11:16.245695  228465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:16.382251  228465 ssh_runner.go:195] Run: sudo systemctl restart crio
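
The sequence of sed commands above rewrites the CRI-O drop-in before the `systemctl restart crio`. Purely as an illustration (the exact sections and surrounding keys in the file may differ), the resulting `/etc/crio/crio.conf.d/02-crio.conf` would look roughly like this, alongside the `runtime-endpoint` written to `/etc/crictl.yaml`:

```toml
# Approximate shape after the edits logged above (illustrative only).
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
```
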
	I1109 14:11:16.180162  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:16.180599  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:16.180671  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:16.180726  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:16.208145  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:16.208163  188127 cri.go:89] found id: ""
	I1109 14:11:16.208172  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:16.208221  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:16.212212  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:16.212272  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:16.241268  188127 cri.go:89] found id: ""
	I1109 14:11:16.241294  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.241304  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:16.241312  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:16.241359  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:16.271861  188127 cri.go:89] found id: ""
	I1109 14:11:16.271885  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.271893  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:16.271900  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:16.271950  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:16.307010  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:16.307041  188127 cri.go:89] found id: ""
	I1109 14:11:16.307052  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:16.307107  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:16.311855  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:16.311918  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:16.340890  188127 cri.go:89] found id: ""
	I1109 14:11:16.340916  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.340927  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:16.340935  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:16.340996  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:16.371701  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:16.371726  188127 cri.go:89] found id: "b02359ee13a07cadd6359d3fb2ebf8cca84546b60e170eaf5853736affecd2d4"
	I1109 14:11:16.371732  188127 cri.go:89] found id: ""
	I1109 14:11:16.371742  188127 logs.go:282] 2 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd b02359ee13a07cadd6359d3fb2ebf8cca84546b60e170eaf5853736affecd2d4]
	I1109 14:11:16.371798  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:16.375997  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:16.380227  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:16.380279  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:16.412068  188127 cri.go:89] found id: ""
	I1109 14:11:16.412097  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.412107  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:16.412115  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:16.412171  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:16.438766  188127 cri.go:89] found id: ""
	I1109 14:11:16.438788  188127 logs.go:282] 0 containers: []
	W1109 14:11:16.438796  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:16.438810  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:16.438822  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:16.521585  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:16.521629  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:16.538010  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:16.538047  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:16.594149  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:16.594175  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:16.594193  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:16.662427  188127 logs.go:123] Gathering logs for kube-controller-manager [b02359ee13a07cadd6359d3fb2ebf8cca84546b60e170eaf5853736affecd2d4] ...
	I1109 14:11:16.662471  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b02359ee13a07cadd6359d3fb2ebf8cca84546b60e170eaf5853736affecd2d4"
	I1109 14:11:16.691486  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:16.691523  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:16.737113  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:16.737147  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:16.769940  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:16.769983  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:16.802590  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:16.802619  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:19.099666  228465 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.717360765s)
	I1109 14:11:19.099700  228465 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:11:19.099747  228465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:11:19.104785  228465 start.go:564] Will wait 60s for crictl version
	I1109 14:11:19.104833  228465 ssh_runner.go:195] Run: which crictl
	I1109 14:11:19.108561  228465 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:11:19.132970  228465 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:11:19.133033  228465 ssh_runner.go:195] Run: crio --version
	I1109 14:11:19.164286  228465 ssh_runner.go:195] Run: crio --version
	I1109 14:11:19.193242  228465 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
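
After restarting CRI-O, the start path waits up to 60s for the runtime socket before probing `crictl version`. A minimal sketch of that wait loop (hypothetical helper, same socket path as the log):

```go
// Poll for the CRI-O socket with a deadline, like the "Will wait 60s for
// socket path /var/run/crio/crio.sock" step above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI-O socket is up")
}
```
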
	I1109 14:11:14.619545  228825 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:11:14.619760  228825 start.go:159] libmachine.API.Create for "old-k8s-version-169816" (driver="docker")
	I1109 14:11:14.619791  228825 client.go:173] LocalClient.Create starting
	I1109 14:11:14.619870  228825 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:11:14.619909  228825 main.go:143] libmachine: Decoding PEM data...
	I1109 14:11:14.619938  228825 main.go:143] libmachine: Parsing certificate...
	I1109 14:11:14.620017  228825 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:11:14.620048  228825 main.go:143] libmachine: Decoding PEM data...
	I1109 14:11:14.620063  228825 main.go:143] libmachine: Parsing certificate...
	I1109 14:11:14.620387  228825 cli_runner.go:164] Run: docker network inspect old-k8s-version-169816 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:11:14.636497  228825 cli_runner.go:211] docker network inspect old-k8s-version-169816 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:11:14.636557  228825 network_create.go:284] running [docker network inspect old-k8s-version-169816] to gather additional debugging logs...
	I1109 14:11:14.636576  228825 cli_runner.go:164] Run: docker network inspect old-k8s-version-169816
	W1109 14:11:14.652136  228825 cli_runner.go:211] docker network inspect old-k8s-version-169816 returned with exit code 1
	I1109 14:11:14.652158  228825 network_create.go:287] error running [docker network inspect old-k8s-version-169816]: docker network inspect old-k8s-version-169816: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-169816 not found
	I1109 14:11:14.652168  228825 network_create.go:289] output of [docker network inspect old-k8s-version-169816]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-169816 not found
	
	** /stderr **
	I1109 14:11:14.652301  228825 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:11:14.669546  228825 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:11:14.670484  228825 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:11:14.671341  228825 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:11:14.672188  228825 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e75730}
	I1109 14:11:14.672208  228825 network_create.go:124] attempt to create docker network old-k8s-version-169816 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 14:11:14.672253  228825 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-169816 old-k8s-version-169816
	I1109 14:11:14.733198  228825 network_create.go:108] docker network old-k8s-version-169816 192.168.76.0/24 created
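
The network_create step above walks candidate private /24s, skips the ones already backing existing bridges (192.168.49.0, .58.0, .67.0), and settles on 192.168.76.0/24. A toy sketch of that selection (assumption: candidates step the third octet by 9, as the 49 → 58 → 67 → 76 progression in the log suggests):

```go
// Pick the first candidate /24 not already used by an existing docker bridge.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // existing networks found via `docker network inspect`
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
}
```
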
	I1109 14:11:14.733226  228825 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-169816" container
	I1109 14:11:14.733275  228825 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:11:14.751440  228825 cli_runner.go:164] Run: docker volume create old-k8s-version-169816 --label name.minikube.sigs.k8s.io=old-k8s-version-169816 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:11:14.769040  228825 oci.go:103] Successfully created a docker volume old-k8s-version-169816
	I1109 14:11:14.769115  228825 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-169816-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-169816 --entrypoint /usr/bin/test -v old-k8s-version-169816:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:11:15.143818  228825 oci.go:107] Successfully prepared a docker volume old-k8s-version-169816
	I1109 14:11:15.143878  228825 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:11:15.143886  228825 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:11:15.143942  228825 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-169816:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:11:19.038182  228825 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-169816:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.894188837s)
	I1109 14:11:19.038211  228825 kic.go:203] duration metric: took 3.894322602s to extract preloaded images to volume ...
	W1109 14:11:19.038309  228825 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 14:11:19.038340  228825 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 14:11:19.038382  228825 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:11:19.097924  228825 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-169816 --name old-k8s-version-169816 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-169816 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-169816 --network old-k8s-version-169816 --ip 192.168.76.2 --volume old-k8s-version-169816:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:11:19.430702  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Running}}
	I1109 14:11:19.194310  228465 cli_runner.go:164] Run: docker network inspect pause-092489 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:11:19.211021  228465 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1109 14:11:19.214970  228465 kubeadm.go:884] updating cluster {Name:pause-092489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-092489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regis
try-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:11:19.215114  228465 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:11:19.215163  228465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:19.244428  228465 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:11:19.244449  228465 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:11:19.244500  228465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:19.279598  228465 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:11:19.279627  228465 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:11:19.279653  228465 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1109 14:11:19.280017  228465 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-092489 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-092489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:11:19.280105  228465 ssh_runner.go:195] Run: crio config
	I1109 14:11:19.329078  228465 cni.go:84] Creating CNI manager for ""
	I1109 14:11:19.329096  228465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:19.329107  228465 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:11:19.329126  228465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-092489 NodeName:pause-092489 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:11:19.329239  228465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-092489"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:11:19.329296  228465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:11:19.339152  228465 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:11:19.339223  228465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:11:19.347204  228465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:11:19.361081  228465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:11:19.374884  228465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1109 14:11:19.388903  228465 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:11:19.392893  228465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:19.535122  228465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:11:19.550858  228465 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489 for IP: 192.168.103.2
	I1109 14:11:19.550879  228465 certs.go:195] generating shared ca certs ...
	I1109 14:11:19.550898  228465 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:19.551056  228465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:11:19.551111  228465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:11:19.551124  228465 certs.go:257] generating profile certs ...
	I1109 14:11:19.551283  228465 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.key
	I1109 14:11:19.551359  228465 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/apiserver.key.451f2da0
	I1109 14:11:19.551414  228465 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/proxy-client.key
	I1109 14:11:19.551576  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:11:19.551620  228465 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:11:19.551629  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:11:19.551718  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:11:19.551750  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:11:19.551780  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:11:19.551835  228465 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:19.552728  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:11:19.572308  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:11:19.596767  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:11:19.614677  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:11:19.633350  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 14:11:19.653028  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:11:19.676167  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:11:19.696461  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:11:19.716292  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:11:19.745095  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:11:19.767471  228465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:11:19.786236  228465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:11:19.802077  228465 ssh_runner.go:195] Run: openssl version
	I1109 14:11:19.810387  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:11:19.822332  228465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:11:19.828260  228465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:11:19.828353  228465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:11:19.882099  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:11:19.893535  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:11:19.904103  228465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:11:19.908965  228465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:11:19.909012  228465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:11:19.955271  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:11:19.964564  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:11:19.974757  228465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:19.979353  228465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:19.979403  228465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:20.022912  228465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
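
Each `openssl x509 -hash -noout` / `ln -fs <hash>.0` pair above registers a PEM with OpenSSL's directory lookup, which finds CA certificates in /etc/ssl/certs by a symlink named after the subject hash. A hedged sketch of the same step as a small Go helper that shells out to openssl (hypothetical, not minikube code):

```go
// Create the <subject-hash>.0 symlink OpenSSL uses to find a trusted CA cert.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
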
	I1109 14:11:20.031481  228465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:11:20.035276  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:11:20.071018  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:11:20.112023  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:11:20.149085  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:11:20.182706  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:11:20.215817  228465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
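
The `-checkend 86400` runs above ask whether each control-plane certificate is still valid for at least another 24 hours before reusing it. A minimal sketch of the same check with crypto/x509 (hypothetical helper; paths taken from the log):

```go
// Report whether a certificate expires within the next 24 hours,
// equivalent in spirit to `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
```
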
	I1109 14:11:20.252170  228465 kubeadm.go:401] StartCluster: {Name:pause-092489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-092489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry
-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:11:20.252299  228465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:11:20.252336  228465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:11:20.278537  228465 cri.go:89] found id: "2b5b10f4f3f84bb026e0851a309a45699c18e38c6152d977136df4d3d7ce824f"
	I1109 14:11:20.278562  228465 cri.go:89] found id: "f7663e0568a65ffaaf3c55965b13b778d4f8efa35939c24622c19d8f69d183fa"
	I1109 14:11:20.278569  228465 cri.go:89] found id: "e255085db448c22caf3ee1ab534518079a741907de0468fc851c20ed70ff553a"
	I1109 14:11:20.278573  228465 cri.go:89] found id: "d330e1ae80e3a30bfdc4b113f673bf973656704d0579d016dd61b77458e2c396"
	I1109 14:11:20.278578  228465 cri.go:89] found id: "8d4c0bf15d6f73166eefeb2a225847ad6656500266e6c4258eabfbaab89d2c19"
	I1109 14:11:20.278582  228465 cri.go:89] found id: "8d495cb1f952ddb415c3231fdd998bdecdb92da5f5d74d99731326c32330b72d"
	I1109 14:11:20.278587  228465 cri.go:89] found id: "e21ada7ed93b3e0e01d2725059c4155c569e0c4090e0e3079f02bc81d5274700"
	I1109 14:11:20.278591  228465 cri.go:89] found id: ""
	I1109 14:11:20.278624  228465 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:11:20.290452  228465 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:11:20Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:11:20.290518  228465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:11:20.298218  228465 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:11:20.298236  228465 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:11:20.298274  228465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:11:20.305471  228465 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:11:20.306130  228465 kubeconfig.go:125] found "pause-092489" server: "https://192.168.103.2:8443"
	I1109 14:11:20.307010  228465 kapi.go:59] client config for pause-092489: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.key", CAFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:11:20.307361  228465 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 14:11:20.307379  228465 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 14:11:20.307386  228465 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 14:11:20.307392  228465 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 14:11:20.307398  228465 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 14:11:20.307701  228465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:11:20.315312  228465 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1109 14:11:20.315341  228465 kubeadm.go:602] duration metric: took 17.09827ms to restartPrimaryControlPlane
	I1109 14:11:20.315351  228465 kubeadm.go:403] duration metric: took 63.187039ms to StartCluster
	I1109 14:11:20.315365  228465 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:20.315429  228465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:11:20.316791  228465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:20.317023  228465 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:11:20.317099  228465 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:11:20.317288  228465 config.go:182] Loaded profile config "pause-092489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:20.319113  228465 out.go:179] * Verifying Kubernetes components...
	I1109 14:11:20.319115  228465 out.go:179] * Enabled addons: 
	I1109 14:11:20.320077  228465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:20.320114  228465 addons.go:515] duration metric: took 3.021449ms for enable addons: enabled=[]
	I1109 14:11:20.420564  228465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:11:20.433045  228465 node_ready.go:35] waiting up to 6m0s for node "pause-092489" to be "Ready" ...
	I1109 14:11:20.440498  228465 node_ready.go:49] node "pause-092489" is "Ready"
	I1109 14:11:20.440527  228465 node_ready.go:38] duration metric: took 7.453948ms for node "pause-092489" to be "Ready" ...
	I1109 14:11:20.440537  228465 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:11:20.440569  228465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:11:20.451690  228465 api_server.go:72] duration metric: took 134.63563ms to wait for apiserver process to appear ...
	I1109 14:11:20.451716  228465 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:11:20.451733  228465 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1109 14:11:20.455755  228465 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1109 14:11:20.456492  228465 api_server.go:141] control plane version: v1.34.1
	I1109 14:11:20.456511  228465 api_server.go:131] duration metric: took 4.789699ms to wait for apiserver health ...
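
Editor's note: the healthz wait above is a plain HTTPS GET against the apiserver that is expected to return 200 with the body "ok". Below is a minimal, self-contained sketch of the same probe; the endpoint URL is taken from the log, and skipping TLS verification is an illustrative shortcut only (minikube itself authenticates against the cluster CA and client certificates).

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// Illustrative only: InsecureSkipVerify avoids needing the cluster CA here.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
	}
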
	I1109 14:11:20.456518  228465 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:11:20.459676  228465 system_pods.go:59] 7 kube-system pods found
	I1109 14:11:20.459704  228465 system_pods.go:61] "coredns-66bc5c9577-z82qd" [0bab7054-1d49-4279-9e6f-62c7dd91785d] Running
	I1109 14:11:20.459711  228465 system_pods.go:61] "etcd-pause-092489" [b96ed30d-4f9d-4286-a1ce-3fbb472b684d] Running
	I1109 14:11:20.459719  228465 system_pods.go:61] "kindnet-h2j52" [61515b37-d564-420e-b3b9-9814a711b0f4] Running
	I1109 14:11:20.459727  228465 system_pods.go:61] "kube-apiserver-pause-092489" [783c57f0-2ba9-45dd-8f73-66ff35cc8a4e] Running
	I1109 14:11:20.459730  228465 system_pods.go:61] "kube-controller-manager-pause-092489" [c0464d94-b6aa-412a-91fa-76112d2b375d] Running
	I1109 14:11:20.459736  228465 system_pods.go:61] "kube-proxy-j62h5" [d33cd6cb-b566-4fe8-81c8-13a78abcf6c0] Running
	I1109 14:11:20.459739  228465 system_pods.go:61] "kube-scheduler-pause-092489" [8889c9f6-9e92-4287-b1c1-abeb0c5048ba] Running
	I1109 14:11:20.459748  228465 system_pods.go:74] duration metric: took 3.225072ms to wait for pod list to return data ...
	I1109 14:11:20.459759  228465 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:11:20.461523  228465 default_sa.go:45] found service account: "default"
	I1109 14:11:20.461538  228465 default_sa.go:55] duration metric: took 1.771013ms for default service account to be created ...
	I1109 14:11:20.461545  228465 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:11:20.463861  228465 system_pods.go:86] 7 kube-system pods found
	I1109 14:11:20.463880  228465 system_pods.go:89] "coredns-66bc5c9577-z82qd" [0bab7054-1d49-4279-9e6f-62c7dd91785d] Running
	I1109 14:11:20.463884  228465 system_pods.go:89] "etcd-pause-092489" [b96ed30d-4f9d-4286-a1ce-3fbb472b684d] Running
	I1109 14:11:20.463888  228465 system_pods.go:89] "kindnet-h2j52" [61515b37-d564-420e-b3b9-9814a711b0f4] Running
	I1109 14:11:20.463891  228465 system_pods.go:89] "kube-apiserver-pause-092489" [783c57f0-2ba9-45dd-8f73-66ff35cc8a4e] Running
	I1109 14:11:20.463894  228465 system_pods.go:89] "kube-controller-manager-pause-092489" [c0464d94-b6aa-412a-91fa-76112d2b375d] Running
	I1109 14:11:20.463898  228465 system_pods.go:89] "kube-proxy-j62h5" [d33cd6cb-b566-4fe8-81c8-13a78abcf6c0] Running
	I1109 14:11:20.463901  228465 system_pods.go:89] "kube-scheduler-pause-092489" [8889c9f6-9e92-4287-b1c1-abeb0c5048ba] Running
	I1109 14:11:20.463906  228465 system_pods.go:126] duration metric: took 2.356919ms to wait for k8s-apps to be running ...
	I1109 14:11:20.463915  228465 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:11:20.463944  228465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:11:20.475573  228465 system_svc.go:56] duration metric: took 11.653869ms WaitForService to wait for kubelet
	I1109 14:11:20.475594  228465 kubeadm.go:587] duration metric: took 158.541103ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:11:20.475617  228465 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:11:20.477452  228465 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:11:20.477471  228465 node_conditions.go:123] node cpu capacity is 8
	I1109 14:11:20.477481  228465 node_conditions.go:105] duration metric: took 1.859163ms to run NodePressure ...
	I1109 14:11:20.477490  228465 start.go:242] waiting for startup goroutines ...
	I1109 14:11:20.477497  228465 start.go:247] waiting for cluster config update ...
	I1109 14:11:20.477503  228465 start.go:256] writing updated cluster config ...
	I1109 14:11:20.477786  228465 ssh_runner.go:195] Run: rm -f paused
	I1109 14:11:20.481118  228465 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:11:20.481776  228465 kapi.go:59] client config for pause-092489: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/pause-092489/client.key", CAFile:"/home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:11:20.483722  228465 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z82qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.487268  228465 pod_ready.go:94] pod "coredns-66bc5c9577-z82qd" is "Ready"
	I1109 14:11:20.487285  228465 pod_ready.go:86] duration metric: took 3.543159ms for pod "coredns-66bc5c9577-z82qd" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.488866  228465 pod_ready.go:83] waiting for pod "etcd-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.492081  228465 pod_ready.go:94] pod "etcd-pause-092489" is "Ready"
	I1109 14:11:20.492099  228465 pod_ready.go:86] duration metric: took 3.218512ms for pod "etcd-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.493749  228465 pod_ready.go:83] waiting for pod "kube-apiserver-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.496968  228465 pod_ready.go:94] pod "kube-apiserver-pause-092489" is "Ready"
	I1109 14:11:20.496984  228465 pod_ready.go:86] duration metric: took 3.217579ms for pod "kube-apiserver-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.498608  228465 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:20.885264  228465 pod_ready.go:94] pod "kube-controller-manager-pause-092489" is "Ready"
	I1109 14:11:20.885292  228465 pod_ready.go:86] duration metric: took 386.667017ms for pod "kube-controller-manager-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:21.085178  228465 pod_ready.go:83] waiting for pod "kube-proxy-j62h5" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:21.485433  228465 pod_ready.go:94] pod "kube-proxy-j62h5" is "Ready"
	I1109 14:11:21.485458  228465 pod_ready.go:86] duration metric: took 400.259087ms for pod "kube-proxy-j62h5" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:21.685552  228465 pod_ready.go:83] waiting for pod "kube-scheduler-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:22.085178  228465 pod_ready.go:94] pod "kube-scheduler-pause-092489" is "Ready"
	I1109 14:11:22.085202  228465 pod_ready.go:86] duration metric: took 399.627478ms for pod "kube-scheduler-pause-092489" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:11:22.085212  228465 pod_ready.go:40] duration metric: took 1.60406283s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:11:22.129110  228465 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:11:22.130653  228465 out.go:179] * Done! kubectl is now configured to use "pause-092489" cluster and "default" namespace by default
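
Editor's note: the closing line of this profile's start reports "kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)", i.e. only the minor version components are compared. A tiny self-contained sketch of that comparison (the helper name minorSkew is made up for illustration, not minikube's API):

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minorSkew returns the absolute difference of the minor components of two
	// version strings such as "1.34.1" or "v1.34.1".
	func minorSkew(a, b string) int {
		minor := func(v string) int {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0
			}
			n, _ := strconv.Atoi(parts[1])
			return n
		}
		d := minor(a) - minor(b)
		if d < 0 {
			d = -d
		}
		return d
	}
	
	func main() {
		fmt.Println(minorSkew("1.34.1", "v1.34.1")) // 0, matching the log
	}
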
	I1109 14:11:19.453020  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:19.474715  228825 cli_runner.go:164] Run: docker exec old-k8s-version-169816 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:11:19.522662  228825 oci.go:144] the created container "old-k8s-version-169816" has a running status.
	I1109 14:11:19.522697  228825 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa...
	I1109 14:11:19.783094  228825 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:11:19.813444  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:19.838420  228825 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:11:19.838442  228825 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-169816 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:11:19.889977  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:19.911392  228825 machine.go:94] provisionDockerMachine start ...
	I1109 14:11:19.911484  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:19.932091  228825 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:19.932454  228825 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:11:19.932479  228825 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:11:20.067382  228825 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169816
	
	I1109 14:11:20.067409  228825 ubuntu.go:182] provisioning hostname "old-k8s-version-169816"
	I1109 14:11:20.067467  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.085885  228825 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:20.086151  228825 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:11:20.086169  228825 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169816 && echo "old-k8s-version-169816" | sudo tee /etc/hostname
	I1109 14:11:20.223416  228825 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169816
	
	I1109 14:11:20.223496  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.240670  228825 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:20.240881  228825 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:11:20.240903  228825 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169816/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:11:20.367782  228825 main.go:143] libmachine: SSH cmd err, output: <nil>: 
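
Editor's note: the SSH command above is idempotent hostname bookkeeping: it only touches /etc/hosts when no line already ends with the machine's hostname, and rewrites the 127.0.1.1 entry if one is present. A rough standalone sketch of the "is the entry already there" check (equivalent to the grep in the script, not minikube code):

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)
	
	// hostInHostsFile reports whether any line of the hosts file ends with the
	// given hostname preceded by whitespace, mirroring: grep -xq '.*\s<host>'.
	func hostInHostsFile(path, host string) (bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return false, err
		}
		defer f.Close()
		re := regexp.MustCompile(`\s` + regexp.QuoteMeta(host) + `$`)
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if re.MatchString(sc.Text()) {
				return true, nil
			}
		}
		return false, sc.Err()
	}
	
	func main() {
		ok, err := hostInHostsFile("/etc/hosts", "old-k8s-version-169816")
		fmt.Println(ok, err)
	}
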
	I1109 14:11:20.367816  228825 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:11:20.367838  228825 ubuntu.go:190] setting up certificates
	I1109 14:11:20.367858  228825 provision.go:84] configureAuth start
	I1109 14:11:20.367911  228825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169816
	I1109 14:11:20.386316  228825 provision.go:143] copyHostCerts
	I1109 14:11:20.386370  228825 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:11:20.386380  228825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:11:20.386446  228825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:11:20.386534  228825 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:11:20.386542  228825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:11:20.386570  228825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:11:20.386627  228825 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:11:20.386649  228825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:11:20.386692  228825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:11:20.386751  228825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169816 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-169816]
	I1109 14:11:20.542962  228825 provision.go:177] copyRemoteCerts
	I1109 14:11:20.543012  228825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:11:20.543053  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.560555  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:20.652227  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:11:20.670546  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:11:20.687030  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:11:20.703111  228825 provision.go:87] duration metric: took 335.237857ms to configureAuth
	I1109 14:11:20.703131  228825 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:11:20.703294  228825 config.go:182] Loaded profile config "old-k8s-version-169816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:11:20.703390  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.721164  228825 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:20.721364  228825 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:11:20.721386  228825 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:11:20.955779  228825 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:11:20.955799  228825 machine.go:97] duration metric: took 1.044378291s to provisionDockerMachine
	I1109 14:11:20.955809  228825 client.go:176] duration metric: took 6.336010633s to LocalClient.Create
	I1109 14:11:20.955825  228825 start.go:167] duration metric: took 6.336066137s to libmachine.API.Create "old-k8s-version-169816"
	I1109 14:11:20.955833  228825 start.go:293] postStartSetup for "old-k8s-version-169816" (driver="docker")
	I1109 14:11:20.955845  228825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:11:20.955910  228825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:11:20.955948  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:20.973812  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:21.066540  228825 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:11:21.069723  228825 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:11:21.069748  228825 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:11:21.069757  228825 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:11:21.069796  228825 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:11:21.069874  228825 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:11:21.069968  228825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:11:21.077358  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:21.096272  228825 start.go:296] duration metric: took 140.427591ms for postStartSetup
	I1109 14:11:21.096622  228825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169816
	I1109 14:11:21.114617  228825 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/config.json ...
	I1109 14:11:21.114877  228825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:11:21.114919  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:21.131455  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:21.220238  228825 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:11:21.224546  228825 start.go:128] duration metric: took 6.606451463s to createHost
	I1109 14:11:21.224567  228825 start.go:83] releasing machines lock for "old-k8s-version-169816", held for 6.606584094s
	I1109 14:11:21.224633  228825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169816
	I1109 14:11:21.241784  228825 ssh_runner.go:195] Run: cat /version.json
	I1109 14:11:21.241835  228825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:11:21.241849  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:21.241905  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:21.259501  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:21.260089  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:21.348028  228825 ssh_runner.go:195] Run: systemctl --version
	I1109 14:11:21.401085  228825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:11:21.433714  228825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:11:21.438118  228825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:11:21.438169  228825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:11:21.462633  228825 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:11:21.462684  228825 start.go:496] detecting cgroup driver to use...
	I1109 14:11:21.462714  228825 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:11:21.462762  228825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:11:21.477467  228825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:11:21.489210  228825 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:11:21.489267  228825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:11:21.504428  228825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:11:21.521805  228825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:11:21.602291  228825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:11:21.684742  228825 docker.go:234] disabling docker service ...
	I1109 14:11:21.684811  228825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:11:21.703355  228825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:11:21.714710  228825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:11:21.793855  228825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:11:21.877841  228825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:11:21.889592  228825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:11:21.903077  228825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1109 14:11:21.903137  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.912675  228825 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:11:21.912729  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.920886  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.928752  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.936721  228825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:11:21.944903  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.952675  228825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.964888  228825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:21.973573  228825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:11:21.980280  228825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:11:21.987179  228825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:22.065034  228825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:11:22.177474  228825 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:11:22.177537  228825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:11:22.181340  228825 start.go:564] Will wait 60s for crictl version
	I1109 14:11:22.181392  228825 ssh_runner.go:195] Run: which crictl
	I1109 14:11:22.185037  228825 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:11:22.211718  228825 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:11:22.211791  228825 ssh_runner.go:195] Run: crio --version
	I1109 14:11:22.243562  228825 ssh_runner.go:195] Run: crio --version
	I1109 14:11:22.280768  228825 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
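
Editor's note: after restarting CRI-O the log waits up to 60s for the socket at /var/run/crio/crio.sock and then for a working crictl. A minimal polling sketch of the socket wait (path and timeout taken from the log; the function name is illustrative):

	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until the path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}
	
	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}
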
	I1109 14:11:22.281917  228825 cli_runner.go:164] Run: docker network inspect old-k8s-version-169816 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:11:22.300791  228825 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:11:22.304843  228825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:11:22.315419  228825 kubeadm.go:884] updating cluster {Name:old-k8s-version-169816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:11:22.315591  228825 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:11:22.315677  228825 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:22.347577  228825 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:11:22.347600  228825 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:11:22.347682  228825 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:22.371879  228825 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:11:22.371900  228825 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:11:22.371908  228825 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1109 14:11:22.372003  228825 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-169816 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:11:22.372083  228825 ssh_runner.go:195] Run: crio config
	I1109 14:11:22.416490  228825 cni.go:84] Creating CNI manager for ""
	I1109 14:11:22.416514  228825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:22.416533  228825 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:11:22.416563  228825 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169816 NodeName:old-k8s-version-169816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:11:22.416754  228825 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-169816"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:11:22.416830  228825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1109 14:11:22.424736  228825 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:11:22.424785  228825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:11:22.432440  228825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1109 14:11:22.444727  228825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:11:22.462611  228825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
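
Editor's note: the kubeadm.yaml shipped to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file before handing it to kubeadm is to decode each document and print its kind; the sketch below assumes the third-party module gopkg.in/yaml.v3 is available, and the file path is taken from the log.

	package main
	
	import (
		"errors"
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3" // assumed dependency, not part of the standard library
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the multi-document stream
				}
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}
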
	I1109 14:11:22.476046  228825 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:11:22.479436  228825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:11:22.489064  228825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:22.575301  228825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:11:22.600211  228825 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816 for IP: 192.168.76.2
	I1109 14:11:22.600230  228825 certs.go:195] generating shared ca certs ...
	I1109 14:11:22.600248  228825 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:22.600406  228825 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:11:22.600462  228825 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:11:22.600475  228825 certs.go:257] generating profile certs ...
	I1109 14:11:22.600540  228825 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.key
	I1109 14:11:22.600564  228825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt with IP's: []
	I1109 14:11:23.031287  228825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt ...
	I1109 14:11:23.031317  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: {Name:mkcd9ed6dc69ce6a3d0b73e16bb6024020ba4fb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.031505  228825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.key ...
	I1109 14:11:23.031522  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.key: {Name:mk5c2a6e8cf42bd3a0054b0d8d5450a14bdd8065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.031633  228825 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key.69cbadc6
	I1109 14:11:23.031668  228825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt.69cbadc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1109 14:11:23.378927  228825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt.69cbadc6 ...
	I1109 14:11:23.378952  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt.69cbadc6: {Name:mkf01539571826156d06efee737dcce465207aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.379094  228825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key.69cbadc6 ...
	I1109 14:11:23.379107  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key.69cbadc6: {Name:mk5ebb8e8122a7b613d07eb43f310370bb8be779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.379181  228825 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt.69cbadc6 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt
	I1109 14:11:23.379250  228825 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key.69cbadc6 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key
	I1109 14:11:23.379302  228825 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.key
	I1109 14:11:23.379316  228825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.crt with IP's: []
	I1109 14:11:23.411058  228825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.crt ...
	I1109 14:11:23.411077  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.crt: {Name:mkbb13b175ee428d211c3094d183405bf8266158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.411195  228825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.key ...
	I1109 14:11:23.411207  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.key: {Name:mkc7ce65443711fdf9dfcd4d8a8a1af4c8a0c611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:23.411366  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:11:23.411397  228825 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:11:23.411406  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:11:23.411425  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:11:23.411449  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:11:23.411488  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:11:23.411552  228825 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:23.412076  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:11:23.429774  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:11:23.446594  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:11:23.463608  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:11:23.480337  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1109 14:11:23.497892  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:11:23.515270  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:11:23.531595  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:11:23.547735  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:11:23.565239  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:11:23.580946  228825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:11:23.597501  228825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:11:23.609487  228825 ssh_runner.go:195] Run: openssl version
	I1109 14:11:23.615587  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:11:23.623410  228825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:11:23.627912  228825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:11:23.627972  228825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:11:23.665375  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:11:23.674009  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:11:23.682040  228825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:23.685589  228825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:23.685649  228825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:23.719997  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:11:23.728856  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:11:23.737617  228825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:11:23.741666  228825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:11:23.741707  228825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:11:23.778705  228825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:11:23.786902  228825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:11:23.790285  228825 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:11:23.790332  228825 kubeadm.go:401] StartCluster: {Name:old-k8s-version-169816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-169816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:11:23.790411  228825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:11:23.790471  228825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:11:23.816796  228825 cri.go:89] found id: ""
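
Editor's note: both here and in the interleaved 188127 process further below, minikube enumerates containers for individual components by running crictl with a label or name filter and collecting the returned IDs (an empty result yields found id: ""). Run locally with crictl installed and sufficient privileges, the same kind of query looks roughly like this sketch:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs runs `crictl ps -a --quiet --name=<name>` and returns the IDs,
	// one per output line, mirroring the queries in the log.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := containerIDs(c)
			fmt.Printf("%s: %d container(s) %v err=%v\n", c, len(ids), ids, err)
		}
	}
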
	I1109 14:11:23.816853  228825 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:11:23.824246  228825 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:11:23.831729  228825 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:11:23.831768  228825 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:11:23.838867  228825 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:11:23.838883  228825 kubeadm.go:158] found existing configuration files:
	
	I1109 14:11:23.838921  228825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:11:23.846448  228825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:11:23.846496  228825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:11:23.853883  228825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:11:23.860835  228825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:11:23.860868  228825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:11:23.867683  228825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:11:23.874822  228825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:11:23.874864  228825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:11:23.881489  228825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:11:23.888427  228825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:11:23.888463  228825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:11:23.895121  228825 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:11:23.936477  228825 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1109 14:11:23.936554  228825 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:11:23.970427  228825 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:11:23.970521  228825 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:11:23.970567  228825 kubeadm.go:319] OS: Linux
	I1109 14:11:23.970656  228825 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:11:23.970716  228825 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:11:23.970783  228825 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:11:23.970853  228825 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:11:23.970922  228825 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:11:23.971016  228825 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:11:23.971107  228825 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:11:23.971175  228825 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:11:24.037661  228825 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:11:24.037853  228825 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:11:24.038006  228825 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 14:11:24.170206  228825 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
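
Editor's note: the preflight block above reports which cgroup controllers kubeadm found enabled (CPU, CPUSET, MEMORY, PIDS, HUGETLB, IO, ...). On a cgroup v2 host you can get a similar picture by reading /sys/fs/cgroup/cgroup.controllers; this is only an approximation of kubeadm's check (devices and freezer, for instance, have no explicit v2 controller entry), offered here as a sketch.

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		// cgroup v2 exposes the enabled controllers as a space-separated list.
		data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
		if err != nil {
			fmt.Println("not a cgroup v2 host (or file unreadable):", err)
			return
		}
		enabled := map[string]bool{}
		for _, c := range strings.Fields(string(data)) {
			enabled[c] = true
		}
		for _, want := range []string{"cpu", "cpuset", "memory", "pids", "hugetlb", "io"} {
			fmt.Printf("%-8s enabled=%v\n", want, enabled[want])
		}
	}
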
	I1109 14:11:19.336004  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:19.336371  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:19.336433  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:19.336490  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:19.365900  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:19.365920  188127 cri.go:89] found id: ""
	I1109 14:11:19.365941  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:19.366005  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:19.369924  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:19.369999  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:19.399890  188127 cri.go:89] found id: ""
	I1109 14:11:19.399959  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.399977  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:19.399985  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:19.400041  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:19.429025  188127 cri.go:89] found id: ""
	I1109 14:11:19.429053  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.429064  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:19.429072  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:19.429127  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:19.463744  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:19.463767  188127 cri.go:89] found id: ""
	I1109 14:11:19.463777  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:19.463831  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:19.468618  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:19.468707  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:19.500006  188127 cri.go:89] found id: ""
	I1109 14:11:19.500034  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.500047  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:19.500055  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:19.500122  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:19.529521  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:19.529567  188127 cri.go:89] found id: ""
	I1109 14:11:19.529578  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:19.529659  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:19.533777  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:19.533890  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:19.566677  188127 cri.go:89] found id: ""
	I1109 14:11:19.566703  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.566712  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:19.566719  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:19.566771  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:19.599269  188127 cri.go:89] found id: ""
	I1109 14:11:19.599294  188127 logs.go:282] 0 containers: []
	W1109 14:11:19.599304  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:19.599315  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:19.599334  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:19.630359  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:19.630391  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:19.751209  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:19.751243  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:19.768952  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:19.768979  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:19.846808  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:19.846830  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:19.846847  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:19.887373  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:19.887407  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:19.947158  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:19.947184  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:19.977403  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:19.977428  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:22.531728  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:22.532135  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:22.532195  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:22.532252  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:22.559400  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:22.559420  188127 cri.go:89] found id: ""
	I1109 14:11:22.559429  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:22.559483  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:22.563178  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:22.563235  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:22.592509  188127 cri.go:89] found id: ""
	I1109 14:11:22.592533  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.592543  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:22.592550  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:22.592595  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:22.622102  188127 cri.go:89] found id: ""
	I1109 14:11:22.622132  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.622142  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:22.622149  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:22.622203  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:22.656782  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:22.656812  188127 cri.go:89] found id: ""
	I1109 14:11:22.656820  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:22.656872  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:22.660757  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:22.660809  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:22.687636  188127 cri.go:89] found id: ""
	I1109 14:11:22.687683  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.687693  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:22.687700  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:22.687756  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:22.712052  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:22.712072  188127 cri.go:89] found id: ""
	I1109 14:11:22.712082  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:22.712130  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:22.715751  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:22.715817  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:22.742570  188127 cri.go:89] found id: ""
	I1109 14:11:22.742590  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.742598  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:22.742604  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:22.742668  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:22.767247  188127 cri.go:89] found id: ""
	I1109 14:11:22.767272  188127 logs.go:282] 0 containers: []
	W1109 14:11:22.767281  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:22.767291  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:22.767304  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:22.791283  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:22.791309  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:22.850917  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:22.850940  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:22.886988  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:22.887015  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:22.982416  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:22.982445  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:22.998366  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:22.998392  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:23.066171  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:23.066189  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:23.066202  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:23.097123  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:23.097150  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:24.173112  228825 out.go:252]   - Generating certificates and keys ...
	I1109 14:11:24.173183  228825 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:11:24.173278  228825 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	
	
	==> CRI-O <==
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.038698392Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.039560664Z" level=info msg="Conmon does support the --sync option"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.03957666Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.03959297Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.040527321Z" level=info msg="Conmon does support the --sync option"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.040543103Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.044546951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.044573415Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.045278821Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.04565578Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.04571246Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.051086191Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.094374872Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-z82qd Namespace:kube-system ID:93c0a608a4ae12c35a0136500527c1034979b4c1cbfe35c62df719a055f3d559 UID:0bab7054-1d49-4279-9e6f-62c7dd91785d NetNS:/var/run/netns/d74dafa6-4d44-42ae-aaae-19938ef0f444 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00088a0e8}] Aliases:map[]}"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.094677998Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-z82qd for CNI network kindnet (type=ptp)"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095212033Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095237716Z" level=info msg="Starting seccomp notifier watcher"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095478801Z" level=info msg="Create NRI interface"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095609323Z" level=info msg="built-in NRI default validator is disabled"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095620118Z" level=info msg="runtime interface created"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.09563396Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095668686Z" level=info msg="runtime interface starting up..."
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.095676696Z" level=info msg="starting plugins..."
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.09569225Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 09 14:11:19 pause-092489 crio[2124]: time="2025-11-09T14:11:19.096070384Z" level=info msg="No systemd watchdog enabled"
	Nov 09 14:11:19 pause-092489 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2b5b10f4f3f84       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   93c0a608a4ae1       coredns-66bc5c9577-z82qd               kube-system
	f7663e0568a65       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   a0069e0c26e35       kindnet-h2j52                          kube-system
	e255085db448c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   e0054db1a3e8e       kube-proxy-j62h5                       kube-system
	d330e1ae80e3a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Running             kube-controller-manager   0                   b37fe04782f20       kube-controller-manager-pause-092489   kube-system
	8d4c0bf15d6f7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   8645ee183cd07       etcd-pause-092489                      kube-system
	8d495cb1f952d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Running             kube-scheduler            0                   181a7f639cb69       kube-scheduler-pause-092489            kube-system
	e21ada7ed93b3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Running             kube-apiserver            0                   c8dc402bb598d       kube-apiserver-pause-092489            kube-system
	
	
	==> coredns [2b5b10f4f3f84bb026e0851a309a45699c18e38c6152d977136df4d3d7ce824f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37501 - 34408 "HINFO IN 6770464144348760453.4471584213420131731. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.86208948s
	
	
	==> describe nodes <==
	Name:               pause-092489
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-092489
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=pause-092489
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_10_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:10:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-092489
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:11:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:11:10 +0000   Sun, 09 Nov 2025 14:10:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:11:10 +0000   Sun, 09 Nov 2025 14:10:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:11:10 +0000   Sun, 09 Nov 2025 14:10:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:11:10 +0000   Sun, 09 Nov 2025 14:11:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-092489
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                8ee20d3d-21db-4a7a-b9a3-995feff3a0bf
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z82qd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-092489                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-h2j52                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-092489             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-092489    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-j62h5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-092489             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  38s (x8 over 39s)  kubelet          Node pause-092489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 39s)  kubelet          Node pause-092489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 39s)  kubelet          Node pause-092489 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node pause-092489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node pause-092489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node pause-092489 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node pause-092489 event: Registered Node pause-092489 in Controller
	  Normal  NodeReady                17s                kubelet          Node pause-092489 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [8d4c0bf15d6f73166eefeb2a225847ad6656500266e6c4258eabfbaab89d2c19] <==
	{"level":"warn","ts":"2025-11-09T14:10:54.900835Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.399031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T14:10:54.900900Z","caller":"traceutil/trace.go:172","msg":"trace[1240050507] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:0; response_revision:294; }","duration":"255.477224ms","start":"2025-11-09T14:10:54.645404Z","end":"2025-11-09T14:10:54.900881Z","steps":["trace[1240050507] 'agreement among raft nodes before linearized reading'  (duration: 127.607031ms)","trace[1240050507] 'range keys from in-memory index tree'  (duration: 127.75586ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:54.901270Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.948065ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789902661887140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/kubeadm:node-proxier\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kubeadm:node-proxier\" value_size:362 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:10:54.901352Z","caller":"traceutil/trace.go:172","msg":"trace[1112341552] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"256.425224ms","start":"2025-11-09T14:10:54.644906Z","end":"2025-11-09T14:10:54.901331Z","steps":["trace[1112341552] 'process raft request'  (duration: 128.041274ms)","trace[1112341552] 'compare'  (duration: 127.848172ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:54.901452Z","caller":"traceutil/trace.go:172","msg":"trace[1794827297] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"187.165732ms","start":"2025-11-09T14:10:54.714227Z","end":"2025-11-09T14:10:54.901393Z","steps":["trace[1794827297] 'process raft request'  (duration: 187.107327ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.031234Z","caller":"traceutil/trace.go:172","msg":"trace[1177562587] linearizableReadLoop","detail":"{readStateIndex:303; appliedIndex:303; }","duration":"128.20273ms","start":"2025-11-09T14:10:54.903011Z","end":"2025-11-09T14:10:55.031214Z","steps":["trace[1177562587] 'read index received'  (duration: 128.190858ms)","trace[1177562587] 'applied index is now lower than readState.Index'  (duration: 10.229µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:55.086019Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.981051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-09T14:10:55.086060Z","caller":"traceutil/trace.go:172","msg":"trace[181611840] transaction","detail":"{read_only:false; number_of_response:0; response_revision:296; }","duration":"238.706431ms","start":"2025-11-09T14:10:54.847350Z","end":"2025-11-09T14:10:55.086056Z","steps":["trace[181611840] 'process raft request'  (duration: 238.623412ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.086085Z","caller":"traceutil/trace.go:172","msg":"trace[1175370108] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:296; }","duration":"183.063266ms","start":"2025-11-09T14:10:54.903006Z","end":"2025-11-09T14:10:55.086069Z","steps":["trace[1175370108] 'agreement among raft nodes before linearized reading'  (duration: 128.285332ms)","trace[1175370108] 'range keys from in-memory index tree'  (duration: 54.594653ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:55.086075Z","caller":"traceutil/trace.go:172","msg":"trace[941473842] transaction","detail":"{read_only:false; number_of_response:0; response_revision:296; }","duration":"238.785858ms","start":"2025-11-09T14:10:54.847261Z","end":"2025-11-09T14:10:55.086047Z","steps":["trace[941473842] 'process raft request'  (duration: 184.038655ms)","trace[941473842] 'compare'  (duration: 54.625983ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:55.086027Z","caller":"traceutil/trace.go:172","msg":"trace[311800633] transaction","detail":"{read_only:false; number_of_response:0; response_revision:296; }","duration":"238.652712ms","start":"2025-11-09T14:10:54.847363Z","end":"2025-11-09T14:10:55.086016Z","steps":["trace[311800633] 'process raft request'  (duration: 238.634412ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.151281Z","caller":"traceutil/trace.go:172","msg":"trace[1761511854] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"236.179117ms","start":"2025-11-09T14:10:54.915092Z","end":"2025-11-09T14:10:55.151271Z","steps":["trace[1761511854] 'process raft request'  (duration: 236.13816ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.151314Z","caller":"traceutil/trace.go:172","msg":"trace[1898559928] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"247.363614ms","start":"2025-11-09T14:10:54.903933Z","end":"2025-11-09T14:10:55.151297Z","steps":["trace[1898559928] 'process raft request'  (duration: 247.219368ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.383587Z","caller":"traceutil/trace.go:172","msg":"trace[716513403] linearizableReadLoop","detail":"{readStateIndex:311; appliedIndex:311; }","duration":"124.841082ms","start":"2025-11-09T14:10:55.258728Z","end":"2025-11-09T14:10:55.383569Z","steps":["trace[716513403] 'read index received'  (duration: 124.835682ms)","trace[716513403] 'applied index is now lower than readState.Index'  (duration: 4.699µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:55.515748Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.997924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" limit:1 ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2025-11-09T14:10:55.515803Z","caller":"traceutil/trace.go:172","msg":"trace[1444622275] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:301; }","duration":"257.070167ms","start":"2025-11-09T14:10:55.258719Z","end":"2025-11-09T14:10:55.515789Z","steps":["trace[1444622275] 'agreement among raft nodes before linearized reading'  (duration: 124.942372ms)","trace[1444622275] 'range keys from in-memory index tree'  (duration: 132.012742ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:55.515998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.134417ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789902661887158 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-092489\" mod_revision:276 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-092489\" value_size:7412 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-092489\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:10:55.516068Z","caller":"traceutil/trace.go:172","msg":"trace[636778214] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"259.194564ms","start":"2025-11-09T14:10:55.256861Z","end":"2025-11-09T14:10:55.516056Z","steps":["trace[636778214] 'process raft request'  (duration: 126.802933ms)","trace[636778214] 'compare'  (duration: 132.051689ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:55.655201Z","caller":"traceutil/trace.go:172","msg":"trace[1473739482] linearizableReadLoop","detail":"{readStateIndex:313; appliedIndex:313; }","duration":"117.033393ms","start":"2025-11-09T14:10:55.538150Z","end":"2025-11-09T14:10:55.655184Z","steps":["trace[1473739482] 'read index received'  (duration: 117.028118ms)","trace[1473739482] 'applied index is now lower than readState.Index'  (duration: 4.405µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:10:55.655312Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.141788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T14:10:55.655344Z","caller":"traceutil/trace.go:172","msg":"trace[1773379853] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:0; response_revision:303; }","duration":"117.19192ms","start":"2025-11-09T14:10:55.538143Z","end":"2025-11-09T14:10:55.655335Z","steps":["trace[1773379853] 'agreement among raft nodes before linearized reading'  (duration: 117.103442ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:10:55.655395Z","caller":"traceutil/trace.go:172","msg":"trace[1620174108] transaction","detail":"{read_only:false; response_revision:304; number_of_response:1; }","duration":"132.585952ms","start":"2025-11-09T14:10:55.522797Z","end":"2025-11-09T14:10:55.655383Z","steps":["trace[1620174108] 'process raft request'  (duration: 132.449248ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:10:55.936924Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.597114ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789902661887171 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/disruption-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/disruption-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:10:55.937017Z","caller":"traceutil/trace.go:172","msg":"trace[544720244] transaction","detail":"{read_only:false; response_revision:305; number_of_response:1; }","duration":"277.072121ms","start":"2025-11-09T14:10:55.659929Z","end":"2025-11-09T14:10:55.937002Z","steps":["trace[544720244] 'process raft request'  (duration: 129.319683ms)","trace[544720244] 'compare'  (duration: 147.429635ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:10:55.937528Z","caller":"traceutil/trace.go:172","msg":"trace[1630939464] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"275.298122ms","start":"2025-11-09T14:10:55.662215Z","end":"2025-11-09T14:10:55.937513Z","steps":["trace[1630939464] 'process raft request'  (duration: 275.12276ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:11:27 up 53 min,  0 user,  load average: 4.14, 2.96, 1.75
	Linux pause-092489 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f7663e0568a65ffaaf3c55965b13b778d4f8efa35939c24622c19d8f69d183fa] <==
	I1109 14:11:00.067060       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:11:00.115122       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1109 14:11:00.115282       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:11:00.115306       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:11:00.115330       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:11:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:11:00.415066       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:11:00.415387       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:11:00.415456       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:11:00.415702       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:11:00.816556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:11:00.816591       1 metrics.go:72] Registering metrics
	I1109 14:11:00.816659       1 controller.go:711] "Syncing nftables rules"
	I1109 14:11:10.317730       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:11:10.317799       1 main.go:301] handling current node
	I1109 14:11:20.324721       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:11:20.324760       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e21ada7ed93b3e0e01d2725059c4155c569e0c4090e0e3079f02bc81d5274700] <==
	I1109 14:10:51.445844       1 policy_source.go:240] refreshing policies
	E1109 14:10:51.453617       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1109 14:10:51.500993       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:10:51.523289       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:10:51.523486       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:10:51.529570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:10:51.530169       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:10:51.615789       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:10:52.303599       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:10:52.307483       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:10:52.307501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:10:52.753066       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:10:52.788470       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:10:52.909413       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:10:52.915278       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1109 14:10:52.916275       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:10:52.920781       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:10:53.326493       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:10:54.175261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:10:54.331429       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:10:54.387904       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:10:58.428845       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:10:58.432148       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:10:58.777758       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1109 14:10:59.428781       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d330e1ae80e3a30bfdc4b113f673bf973656704d0579d016dd61b77458e2c396] <==
	I1109 14:10:58.317698       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:10:58.317712       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:10:58.317719       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:10:58.325892       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:10:58.325916       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1109 14:10:58.325923       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:10:58.325942       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:10:58.325971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1109 14:10:58.326021       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:10:58.326113       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:10:58.327120       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:10:58.327146       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:10:58.329295       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:10:58.330467       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:10:58.330485       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:10:58.330530       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:10:58.330586       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:10:58.330598       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:10:58.330604       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:10:58.331681       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:10:58.336386       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-092489" podCIDRs=["10.244.0.0/24"]
	I1109 14:10:58.338474       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:10:58.342677       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:10:58.350032       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:11:13.277997       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e255085db448c22caf3ee1ab534518079a741907de0468fc851c20ed70ff553a] <==
	I1109 14:10:59.889455       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:10:59.964869       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:11:00.067517       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:11:00.067557       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1109 14:11:00.067683       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:11:00.085625       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:11:00.085701       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:11:00.090485       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:11:00.090834       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:11:00.090861       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:11:00.092191       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:11:00.092215       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:11:00.092228       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:11:00.092239       1 config.go:200] "Starting service config controller"
	I1109 14:11:00.092251       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:11:00.092257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:11:00.092345       1 config.go:309] "Starting node config controller"
	I1109 14:11:00.092353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:11:00.092360       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:11:00.192338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:11:00.192428       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:11:00.192528       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8d495cb1f952ddb415c3231fdd998bdecdb92da5f5d74d99731326c32330b72d] <==
	E1109 14:10:51.377047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:10:51.377223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:10:51.377372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:10:51.377401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:10:51.377606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:10:51.377670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:10:51.377662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:10:51.377691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:10:51.377751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:10:51.377805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:10:51.377831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:10:51.377954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:10:51.377932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:10:51.378063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:10:51.378078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:10:51.378127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:10:52.207947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:10:52.215047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:10:52.325746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 14:10:52.388877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:10:52.434228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:10:52.556811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:10:52.559834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:10:52.587196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1109 14:10:55.475177       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:11:16 pause-092489 kubelet[1285]: E1109 14:11:16.889366    1285 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 09 14:11:16 pause-092489 kubelet[1285]: E1109 14:11:16.889428    1285 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:16 pause-092489 kubelet[1285]: E1109 14:11:16.889445    1285 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:16 pause-092489 kubelet[1285]: W1109 14:11:16.989772    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: W1109 14:11:17.172775    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: W1109 14:11:17.450302    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: W1109 14:11:17.802466    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.834919    1285 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.835012    1285 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.835028    1285 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.835039    1285 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.890528    1285 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.890585    1285 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:17 pause-092489 kubelet[1285]: E1109 14:11:17.890597    1285 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:18 pause-092489 kubelet[1285]: W1109 14:11:18.332609    1285 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.844787    1285 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.844839    1285 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.891422    1285 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.891476    1285 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:18 pause-092489 kubelet[1285]: E1109 14:11:18.891488    1285 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 09 14:11:22 pause-092489 kubelet[1285]: I1109 14:11:22.543185    1285 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 09 14:11:22 pause-092489 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:11:22 pause-092489 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:11:22 pause-092489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:11:22 pause-092489 systemd[1]: kubelet.service: Consumed 1.131s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-092489 -n pause-092489
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-092489 -n pause-092489: exit status 2 (321.53438ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-092489 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.69s)
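The failing sequence above can be replayed by hand. A minimal sketch, assembled only from commands that appear elsewhere in this report (the audit log and the post-mortem status check), assuming a locally built out/minikube-linux-amd64 and the docker driver; the profile name is reused purely for illustration:

	# start a crio-based profile, then pause it, as the test does
	out/minikube-linux-amd64 start -p pause-092489 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 pause -p pause-092489 --alsologtostderr -v=5
	# the post-mortem check that returned exit status 2 above
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p pause-092489 -n pause-092489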

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (232.618779ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:12:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-169816 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-169816 describe deploy/metrics-server -n kube-system: exit status 1 (55.89656ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-169816 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
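Both commands in this assertion appear verbatim above and can be replayed against the same profile. A minimal sketch, assuming old-k8s-version-169816 is still running and out/minikube-linux-amd64 is built locally (the final grep is only an illustrative way to look for the expected image prefix):

	# enable the addon with the overridden image and registry, exactly as the test does
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# the test expects the deployment image to contain fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context old-k8s-version-169816 describe deploy/metrics-server -n kube-system | grep fake.domain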
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-169816
helpers_test.go:243: (dbg) docker inspect old-k8s-version-169816:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9",
	        "Created": "2025-11-09T14:11:19.114933288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:11:19.144998134Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/hosts",
	        "LogPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9-json.log",
	        "Name": "/old-k8s-version-169816",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-169816:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-169816",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9",
	                "LowerDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-169816",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-169816/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-169816",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-169816",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-169816",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92fb4f512138d43469d60b93cf04af516931f8feee66efc5e771cf32fdd02b47",
	            "SandboxKey": "/var/run/docker/netns/92fb4f512138",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-169816": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:1c:ae:56:db:26",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0ef03f929b33a2352ddcf362b70e81410120fda868115e956b4bb456ca7cf63",
	                    "EndpointID": "a274479b24b21a0fbbf32a0cfccf669506c10024dd696734b903a36092e4731b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-169816",
	                        "7b32476bd090"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 -n old-k8s-version-169816
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-169816 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-593530 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo containerd config dump                                                                                                                                                                                                  │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo crio config                                                                                                                                                                                                             │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ delete  │ -p cilium-593530                                                                                                                                                                                                                              │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │ 09 Nov 25 14:10 UTC │
	│ start   │ -p cert-options-350702 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ cert-options-350702 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ -p cert-options-350702 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ delete  │ -p cert-options-350702                                                                                                                                                                                                                        │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ pause   │ -p pause-092489 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	│ delete  │ -p pause-092489                                                                                                                                                                                                                               │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932      │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-169816 │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:11:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:11:30.464930  234584 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:11:30.465055  234584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:11:30.465062  234584 out.go:374] Setting ErrFile to fd 2...
	I1109 14:11:30.465068  234584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:11:30.465376  234584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:11:30.466038  234584 out.go:368] Setting JSON to false
	I1109 14:11:30.469409  234584 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3240,"bootTime":1762694250,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:11:30.469532  234584 start.go:143] virtualization: kvm guest
	I1109 14:11:30.476739  234584 out.go:179] * [no-preload-152932] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:11:30.478279  234584 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:11:30.478382  234584 notify.go:221] Checking for updates...
	I1109 14:11:30.480476  234584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:11:30.481473  234584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:11:30.482691  234584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:11:30.483755  234584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:11:30.485906  234584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:11:30.487690  234584 config.go:182] Loaded profile config "cert-expiration-883873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:30.487835  234584 config.go:182] Loaded profile config "kubernetes-upgrade-755159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:30.487961  234584 config.go:182] Loaded profile config "old-k8s-version-169816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:11:30.488084  234584 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:11:30.518087  234584 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:11:30.518219  234584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:11:30.587444  234584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:11:30.569753828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:11:30.587616  234584 docker.go:319] overlay module found
	I1109 14:11:30.589989  234584 out.go:179] * Using the docker driver based on user configuration
	I1109 14:11:30.591091  234584 start.go:309] selected driver: docker
	I1109 14:11:30.591128  234584 start.go:930] validating driver "docker" against <nil>
	I1109 14:11:30.591164  234584 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:11:30.592028  234584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:11:30.679441  234584 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:11:30.667142641 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:11:30.679715  234584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:11:30.680004  234584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:11:30.681845  234584 out.go:179] * Using Docker driver with root privileges
	I1109 14:11:30.683293  234584 cni.go:84] Creating CNI manager for ""
	I1109 14:11:30.683374  234584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:30.683385  234584 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:11:30.683458  234584 start.go:353] cluster config:
	{Name:no-preload-152932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-152932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:11:30.685077  234584 out.go:179] * Starting "no-preload-152932" primary control-plane node in "no-preload-152932" cluster
	I1109 14:11:30.686306  234584 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:11:30.687388  234584 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:11:30.688505  234584 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:11:30.688540  234584 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:11:30.688634  234584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/config.json ...
	I1109 14:11:30.688702  234584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/config.json: {Name:mk2ca482d3693e5ab6739c62d2a8322c3f3159b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:30.688808  234584 cache.go:107] acquiring lock: {Name:mkd68504ca413aff019d310e5c445f41a315e6a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.688809  234584 cache.go:107] acquiring lock: {Name:mkb1b8f40c84ca3766e07a50ff76656add736750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.688884  234584 cache.go:115] /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1109 14:11:30.688898  234584 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.686µs
	I1109 14:11:30.688899  234584 cache.go:107] acquiring lock: {Name:mkfa78a6b46737528738821cf35090bbc6115c3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.688920  234584 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1109 14:11:30.688936  234584 cache.go:107] acquiring lock: {Name:mk15e16ba6cb2d53c06d47c8ab415c528b1c8cd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.688943  234584 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:30.688986  234584 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:30.689034  234584 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:30.689217  234584 cache.go:107] acquiring lock: {Name:mkb54c6892d4064accaba19b6b081c2aca69ac6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.689214  234584 cache.go:107] acquiring lock: {Name:mk10c1178cc858d64f7aac6f6ec576bb5853ec67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.689384  234584 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:30.689461  234584 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1109 14:11:30.689261  234584 cache.go:107] acquiring lock: {Name:mk7995b4bed49b05647cdaf490d71999d6fcacd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.689229  234584 cache.go:107] acquiring lock: {Name:mk6204ee63940a3f0deb65f69a9580b5477e12d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.689770  234584 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:30.689868  234584 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:30.690395  234584 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:30.690535  234584 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:30.690395  234584 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:30.690740  234584 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:30.691018  234584 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:30.691054  234584 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:30.691057  234584 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1109 14:11:30.716017  234584 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:11:30.716037  234584 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:11:30.716055  234584 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:11:30.716091  234584 start.go:360] acquireMachinesLock for no-preload-152932: {Name:mk346e7b372bd21721b118e7adf0c9f2990bf4d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:11:30.716193  234584 start.go:364] duration metric: took 82.378µs to acquireMachinesLock for "no-preload-152932"
	I1109 14:11:30.716221  234584 start.go:93] Provisioning new machine with config: &{Name:no-preload-152932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-152932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:11:30.716310  234584 start.go:125] createHost starting for "" (driver="docker")
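	For context, the profile config saved above corresponds roughly to a start invocation along these lines (a sketch assuming standard minikube flags; the exact command used by the test harness is not shown in this excerpt):
	
	    out/minikube-linux-amd64 start -p no-preload-152932 \
	      --driver=docker --container-runtime=crio \
	      --memory=3072 --cpus=2 \
	      --kubernetes-version=v1.34.1 \
	      --preload=false
	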
	I1109 14:11:32.357463  228825 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502908 seconds
	I1109 14:11:32.357621  228825 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:11:32.380259  228825 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:11:32.907774  228825 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:11:32.907955  228825 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-169816 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:11:33.418516  228825 kubeadm.go:319] [bootstrap-token] Using token: 92ioso.n679f570x7awb92d
	I1109 14:11:33.419793  228825 out.go:252]   - Configuring RBAC rules ...
	I1109 14:11:33.419956  228825 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:11:33.424825  228825 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:11:33.430302  228825 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:11:33.432622  228825 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:11:33.435107  228825 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:11:33.438028  228825 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:11:33.446176  228825 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:11:33.623614  228825 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:11:33.828251  228825 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:11:33.829565  228825 kubeadm.go:319] 
	I1109 14:11:33.829676  228825 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:11:33.829688  228825 kubeadm.go:319] 
	I1109 14:11:33.829769  228825 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:11:33.829778  228825 kubeadm.go:319] 
	I1109 14:11:33.829838  228825 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:11:33.829923  228825 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:11:33.830002  228825 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:11:33.830011  228825 kubeadm.go:319] 
	I1109 14:11:33.830084  228825 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:11:33.830094  228825 kubeadm.go:319] 
	I1109 14:11:33.830159  228825 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:11:33.830178  228825 kubeadm.go:319] 
	I1109 14:11:33.830271  228825 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:11:33.830357  228825 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:11:33.830443  228825 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:11:33.830453  228825 kubeadm.go:319] 
	I1109 14:11:33.830551  228825 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:11:33.830633  228825 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:11:33.830670  228825 kubeadm.go:319] 
	I1109 14:11:33.830784  228825 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 92ioso.n679f570x7awb92d \
	I1109 14:11:33.830936  228825 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:11:33.830979  228825 kubeadm.go:319] 	--control-plane 
	I1109 14:11:33.830996  228825 kubeadm.go:319] 
	I1109 14:11:33.831099  228825 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:11:33.831107  228825 kubeadm.go:319] 
	I1109 14:11:33.831222  228825 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 92ioso.n679f570x7awb92d \
	I1109 14:11:33.831328  228825 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 14:11:33.833199  228825 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:11:33.833303  228825 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:11:33.833322  228825 cni.go:84] Creating CNI manager for ""
	I1109 14:11:33.833332  228825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:33.834732  228825 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:11:29.203167  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:29.203190  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:29.229891  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:29.229921  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:29.279210  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:29.279237  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:29.312689  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:29.312717  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:31.912697  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:31.913070  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:31.913121  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:31.913165  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:31.964031  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:31.964178  188127 cri.go:89] found id: ""
	I1109 14:11:31.964236  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:31.964354  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:31.970106  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:31.970294  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:32.011286  188127 cri.go:89] found id: ""
	I1109 14:11:32.011383  188127 logs.go:282] 0 containers: []
	W1109 14:11:32.011423  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:32.011432  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:32.011501  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:32.048238  188127 cri.go:89] found id: ""
	I1109 14:11:32.048263  188127 logs.go:282] 0 containers: []
	W1109 14:11:32.048272  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:32.048280  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:32.048331  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:32.121214  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:32.121472  188127 cri.go:89] found id: ""
	I1109 14:11:32.121488  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:32.121768  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:32.126801  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:32.126856  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:32.165273  188127 cri.go:89] found id: ""
	I1109 14:11:32.165498  188127 logs.go:282] 0 containers: []
	W1109 14:11:32.165529  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:32.165548  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:32.165626  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:32.199620  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:32.199715  188127 cri.go:89] found id: ""
	I1109 14:11:32.199731  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:32.199802  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:32.206008  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:32.206073  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:32.235215  188127 cri.go:89] found id: ""
	I1109 14:11:32.235240  188127 logs.go:282] 0 containers: []
	W1109 14:11:32.235260  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:32.235268  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:32.235330  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:32.263769  188127 cri.go:89] found id: ""
	I1109 14:11:32.263789  188127 logs.go:282] 0 containers: []
	W1109 14:11:32.263797  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:32.263805  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:32.263816  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:32.290292  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:32.290316  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:32.349187  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:32.349228  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:32.394793  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:32.394831  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:32.496337  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:32.496365  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:32.510403  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:32.510431  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:32.599225  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:32.599246  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:32.599273  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:32.639843  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:32.639870  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:33.836163  228825 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:11:33.840501  228825 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1109 14:11:33.840518  228825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:11:33.854338  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
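	After the kindnet manifest is applied, one way to confirm the CNI pods come up is to query kube-system with the same kubeconfig and kubectl binary; the `app=kindnet` label below is an assumption based on the upstream kindnet manifest, not something read from this log:
	
	    sudo /var/lib/minikube/binaries/v1.28.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get pods -l app=kindnet -o wide
	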
	I1109 14:11:30.718515  234584 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:11:30.718753  234584 start.go:159] libmachine.API.Create for "no-preload-152932" (driver="docker")
	I1109 14:11:30.718787  234584 client.go:173] LocalClient.Create starting
	I1109 14:11:30.718864  234584 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:11:30.718905  234584 main.go:143] libmachine: Decoding PEM data...
	I1109 14:11:30.718959  234584 main.go:143] libmachine: Parsing certificate...
	I1109 14:11:30.719027  234584 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:11:30.719056  234584 main.go:143] libmachine: Decoding PEM data...
	I1109 14:11:30.719071  234584 main.go:143] libmachine: Parsing certificate...
	I1109 14:11:30.719424  234584 cli_runner.go:164] Run: docker network inspect no-preload-152932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:11:30.741231  234584 cli_runner.go:211] docker network inspect no-preload-152932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:11:30.741303  234584 network_create.go:284] running [docker network inspect no-preload-152932] to gather additional debugging logs...
	I1109 14:11:30.741326  234584 cli_runner.go:164] Run: docker network inspect no-preload-152932
	W1109 14:11:30.761358  234584 cli_runner.go:211] docker network inspect no-preload-152932 returned with exit code 1
	I1109 14:11:30.761391  234584 network_create.go:287] error running [docker network inspect no-preload-152932]: docker network inspect no-preload-152932: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-152932 not found
	I1109 14:11:30.761405  234584 network_create.go:289] output of [docker network inspect no-preload-152932]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-152932 not found
	
	** /stderr **
	I1109 14:11:30.761489  234584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:11:30.781355  234584 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:11:30.782095  234584 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:11:30.782756  234584 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:11:30.783216  234584 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f0ef03f929b3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:cd:f4:b2:ad:24} reservation:<nil>}
	I1109 14:11:30.783663  234584 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f74518934890 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a2:f1:d9:01:be:30} reservation:<nil>}
	I1109 14:11:30.784227  234584 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-3e98522bae0a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:46:01:14:3d:94:7b} reservation:<nil>}
	I1109 14:11:30.785137  234584 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e91040}
	I1109 14:11:30.785167  234584 network_create.go:124] attempt to create docker network no-preload-152932 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1109 14:11:30.785210  234584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-152932 no-preload-152932
	I1109 14:11:30.829752  234584 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1109 14:11:30.851401  234584 network_create.go:108] docker network no-preload-152932 192.168.103.0/24 created
	I1109 14:11:30.851423  234584 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-152932" container
	I1109 14:11:30.851471  234584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:11:30.852047  234584 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1109 14:11:30.856855  234584 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1109 14:11:30.857199  234584 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1109 14:11:30.861355  234584 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1109 14:11:30.866287  234584 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1109 14:11:30.871521  234584 cli_runner.go:164] Run: docker volume create no-preload-152932 --label name.minikube.sigs.k8s.io=no-preload-152932 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:11:30.889864  234584 oci.go:103] Successfully created a docker volume no-preload-152932
	I1109 14:11:30.889913  234584 cli_runner.go:164] Run: docker run --rm --name no-preload-152932-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-152932 --entrypoint /usr/bin/test -v no-preload-152932:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:11:30.890856  234584 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1109 14:11:30.960674  234584 cache.go:157] /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1109 14:11:30.960698  234584 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 271.543814ms
	I1109 14:11:30.960710  234584 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1109 14:11:31.158411  234584 cache.go:157] /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1109 14:11:31.158436  234584 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 469.212351ms
	I1109 14:11:31.158448  234584 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1109 14:11:31.317183  234584 oci.go:107] Successfully prepared a docker volume no-preload-152932
	I1109 14:11:31.317221  234584 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1109 14:11:31.317284  234584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 14:11:31.317311  234584 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 14:11:31.317346  234584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:11:31.371878  234584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-152932 --name no-preload-152932 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-152932 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-152932 --network no-preload-152932 --ip 192.168.103.2 --volume no-preload-152932:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:11:31.690768  234584 cli_runner.go:164] Run: docker container inspect no-preload-152932 --format={{.State.Running}}
	I1109 14:11:31.713289  234584 cli_runner.go:164] Run: docker container inspect no-preload-152932 --format={{.State.Status}}
	I1109 14:11:31.736563  234584 cli_runner.go:164] Run: docker exec no-preload-152932 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:11:31.795105  234584 oci.go:144] the created container "no-preload-152932" has a running status.
	I1109 14:11:31.795135  234584 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa...
	I1109 14:11:32.002481  234584 cache.go:157] /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1109 14:11:32.002512  234584 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.313577898s
	I1109 14:11:32.002526  234584 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1109 14:11:32.211521  234584 cache.go:157] /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1109 14:11:32.211568  234584 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.522765861s
	I1109 14:11:32.211583  234584 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1109 14:11:32.330579  234584 cache.go:157] /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1109 14:11:32.330617  234584 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.641722555s
	I1109 14:11:32.330636  234584 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1109 14:11:32.594125  234584 cache.go:157] /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1109 14:11:32.594166  234584 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.904952147s
	I1109 14:11:32.594185  234584 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1109 14:11:32.703573  234584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:11:32.728823  234584 cli_runner.go:164] Run: docker container inspect no-preload-152932 --format={{.State.Status}}
	I1109 14:11:32.744275  234584 cache.go:157] /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1109 14:11:32.744308  234584 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.055050724s
	I1109 14:11:32.744324  234584 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1109 14:11:32.744341  234584 cache.go:87] Successfully saved all images to host disk.
	I1109 14:11:32.747088  234584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:11:32.747109  234584 kic_runner.go:114] Args: [docker exec --privileged no-preload-152932 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:11:32.792847  234584 cli_runner.go:164] Run: docker container inspect no-preload-152932 --format={{.State.Status}}
	I1109 14:11:32.810631  234584 machine.go:94] provisionDockerMachine start ...
	I1109 14:11:32.810730  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:32.827151  234584 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:32.827413  234584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:11:32.827429  234584 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:11:32.952935  234584 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-152932
	
	I1109 14:11:32.952959  234584 ubuntu.go:182] provisioning hostname "no-preload-152932"
	I1109 14:11:32.953028  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:32.971738  234584 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:32.971990  234584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:11:32.972010  234584 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-152932 && echo "no-preload-152932" | sudo tee /etc/hostname
	I1109 14:11:33.105373  234584 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-152932
	
	I1109 14:11:33.105449  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:33.123778  234584 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:33.124031  234584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:11:33.124051  234584 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-152932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-152932/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-152932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:11:33.249759  234584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:11:33.249787  234584 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:11:33.249819  234584 ubuntu.go:190] setting up certificates
	I1109 14:11:33.249829  234584 provision.go:84] configureAuth start
	I1109 14:11:33.249891  234584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-152932
	I1109 14:11:33.267425  234584 provision.go:143] copyHostCerts
	I1109 14:11:33.267487  234584 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:11:33.267498  234584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:11:33.267568  234584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:11:33.267723  234584 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:11:33.267736  234584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:11:33.267785  234584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:11:33.267871  234584 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:11:33.267880  234584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:11:33.267920  234584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:11:33.268006  234584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.no-preload-152932 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-152932]
	I1109 14:11:33.780760  234584 provision.go:177] copyRemoteCerts
	I1109 14:11:33.780817  234584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:11:33.780854  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:33.801357  234584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa Username:docker}
	I1109 14:11:33.894820  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:11:33.913735  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:11:33.937448  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:11:33.955369  234584 provision.go:87] duration metric: took 705.521723ms to configureAuth
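	The server certificate generated during configureAuth embeds the SANs listed above (127.0.0.1, 192.168.103.2, localhost, minikube, no-preload-152932); if needed, they can be double-checked on the host with openssl, for example:
	
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
	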
	I1109 14:11:33.955392  234584 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:11:33.955529  234584 config.go:182] Loaded profile config "no-preload-152932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:11:33.955622  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:33.974136  234584 main.go:143] libmachine: Using SSH client type: native
	I1109 14:11:33.974389  234584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:11:33.974412  234584 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:11:34.205714  234584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:11:34.205741  234584 machine.go:97] duration metric: took 1.395078184s to provisionDockerMachine
	I1109 14:11:34.205754  234584 client.go:176] duration metric: took 3.486955031s to LocalClient.Create
	I1109 14:11:34.205778  234584 start.go:167] duration metric: took 3.487025775s to libmachine.API.Create "no-preload-152932"
	I1109 14:11:34.205796  234584 start.go:293] postStartSetup for "no-preload-152932" (driver="docker")
	I1109 14:11:34.205809  234584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:11:34.205869  234584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:11:34.205907  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:34.224453  234584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa Username:docker}
	I1109 14:11:34.318726  234584 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:11:34.322226  234584 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:11:34.322256  234584 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:11:34.322267  234584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:11:34.322326  234584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:11:34.322420  234584 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:11:34.322529  234584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:11:34.330200  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:34.350943  234584 start.go:296] duration metric: took 145.126756ms for postStartSetup
	I1109 14:11:34.351269  234584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-152932
	I1109 14:11:34.370139  234584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/config.json ...
	I1109 14:11:34.370421  234584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:11:34.370464  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:34.389267  234584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa Username:docker}
	I1109 14:11:34.488332  234584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:11:34.493126  234584 start.go:128] duration metric: took 3.776803457s to createHost
	I1109 14:11:34.493154  234584 start.go:83] releasing machines lock for "no-preload-152932", held for 3.776947995s
	I1109 14:11:34.493220  234584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-152932
	I1109 14:11:34.514072  234584 ssh_runner.go:195] Run: cat /version.json
	I1109 14:11:34.514125  234584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:11:34.514224  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:34.514128  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:11:34.533158  234584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa Username:docker}
	I1109 14:11:34.534729  234584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa Username:docker}
	I1109 14:11:34.626624  234584 ssh_runner.go:195] Run: systemctl --version
	I1109 14:11:34.685919  234584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:11:34.723757  234584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:11:34.728781  234584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:11:34.728852  234584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:11:34.763221  234584 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:11:34.763241  234584 start.go:496] detecting cgroup driver to use...
	I1109 14:11:34.763269  234584 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:11:34.763304  234584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:11:34.781062  234584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:11:34.793146  234584 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:11:34.793197  234584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:11:34.808978  234584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:11:34.824694  234584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:11:34.904291  234584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:11:34.991366  234584 docker.go:234] disabling docker service ...
	I1109 14:11:34.991437  234584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:11:35.008391  234584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:11:35.020118  234584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:11:35.107365  234584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:11:35.195212  234584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:11:35.207159  234584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:11:35.222603  234584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:11:35.222688  234584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:35.232854  234584 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:11:35.232910  234584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:35.241465  234584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:35.251095  234584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:35.259767  234584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:11:35.268141  234584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:35.277087  234584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:35.290742  234584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:11:35.299428  234584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:11:35.307549  234584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:11:35.314924  234584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:35.399190  234584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:11:35.524730  234584 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:11:35.524801  234584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:11:35.528711  234584 start.go:564] Will wait 60s for crictl version
	I1109 14:11:35.528766  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.532440  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:11:35.558608  234584 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:11:35.558703  234584 ssh_runner.go:195] Run: crio --version
	I1109 14:11:35.591158  234584 ssh_runner.go:195] Run: crio --version
	I1109 14:11:35.621520  234584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
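	Condensed, the CRI-O preparation above amounts to pointing crictl at the crio socket, pinning the pause image, switching the cgroup manager to systemd, and restarting the service. A minimal equivalent of the commands run over SSH, assuming the same /etc/crio/crio.conf.d/02-crio.conf drop-in, would be:
	
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	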
	I1109 14:11:35.194726  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:35.195336  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:35.195383  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:35.195419  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:35.223357  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:35.223375  188127 cri.go:89] found id: ""
	I1109 14:11:35.223382  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:35.223434  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.227276  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:35.227339  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:35.253956  188127 cri.go:89] found id: ""
	I1109 14:11:35.253976  188127 logs.go:282] 0 containers: []
	W1109 14:11:35.253985  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:35.253992  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:35.254042  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:35.279708  188127 cri.go:89] found id: ""
	I1109 14:11:35.279735  188127 logs.go:282] 0 containers: []
	W1109 14:11:35.279746  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:35.279754  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:35.279806  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:35.305887  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:35.305908  188127 cri.go:89] found id: ""
	I1109 14:11:35.305917  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:35.305962  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.309719  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:35.309782  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:35.335489  188127 cri.go:89] found id: ""
	I1109 14:11:35.335513  188127 logs.go:282] 0 containers: []
	W1109 14:11:35.335523  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:35.335530  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:35.335574  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:35.368404  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:35.368427  188127 cri.go:89] found id: ""
	I1109 14:11:35.368436  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:35.368498  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.372082  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:35.372139  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:35.396475  188127 cri.go:89] found id: ""
	I1109 14:11:35.396500  188127 logs.go:282] 0 containers: []
	W1109 14:11:35.396510  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:35.396518  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:35.396573  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:35.423872  188127 cri.go:89] found id: ""
	I1109 14:11:35.423896  188127 logs.go:282] 0 containers: []
	W1109 14:11:35.423904  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:35.423919  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:35.423933  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:35.453089  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:35.453120  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:35.555068  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:35.555092  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:35.569275  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:35.569296  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:35.644797  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:35.644815  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:35.644830  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:35.680635  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:35.680688  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:35.729716  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:35.729742  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:35.755019  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:35.755041  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
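(For context on the loop above: the process keeps probing the apiserver's /healthz endpoint at 192.168.85.2:8443, gets "connection refused", and falls back to collecting crictl/journalctl logs before retrying. The following is only an illustrative Go sketch of that probe-and-retry pattern, not minikube's actual api_server.go; the URL, timeout, and the use of InsecureSkipVerify are assumptions made to keep the sketch self-contained.)

// healthz_poll_sketch.go - minimal sketch of polling an apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The real check authenticates with client certificates; skipping
		// verification here only keeps the sketch runnable on its own.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered /healthz
			}
		}
		time.Sleep(3 * time.Second) // roughly the retry cadence visible in the log
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}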
	I1109 14:11:38.304708  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:38.305119  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:38.305179  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:38.305269  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:38.336263  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:38.336290  188127 cri.go:89] found id: ""
	I1109 14:11:38.336300  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:38.336352  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:38.340697  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:38.340759  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:38.371539  188127 cri.go:89] found id: ""
	I1109 14:11:38.371564  188127 logs.go:282] 0 containers: []
	W1109 14:11:38.371575  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:38.371583  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:38.371671  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:38.403503  188127 cri.go:89] found id: ""
	I1109 14:11:38.403528  188127 logs.go:282] 0 containers: []
	W1109 14:11:38.403538  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:38.403579  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:38.403652  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:38.432489  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:38.432514  188127 cri.go:89] found id: ""
	I1109 14:11:38.432524  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:38.432577  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:38.436752  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:38.436814  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:38.468970  188127 cri.go:89] found id: ""
	I1109 14:11:38.469037  188127 logs.go:282] 0 containers: []
	W1109 14:11:38.469048  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:38.469056  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:38.469211  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:38.511156  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:38.511223  188127 cri.go:89] found id: ""
	I1109 14:11:38.511235  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:38.511298  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:38.516181  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:38.516240  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:38.548974  188127 cri.go:89] found id: ""
	I1109 14:11:38.549001  188127 logs.go:282] 0 containers: []
	W1109 14:11:38.549014  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:38.549022  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:38.549076  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:38.579821  188127 cri.go:89] found id: ""
	I1109 14:11:38.579848  188127 logs.go:282] 0 containers: []
	W1109 14:11:38.579858  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:38.579869  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:38.579881  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:38.613477  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:38.613514  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:38.722661  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:38.722697  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:38.738845  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:38.738871  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:38.803503  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:38.803528  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:38.803544  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:38.845501  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:38.845533  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:38.901989  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:38.902026  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:38.930512  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:38.930538  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:34.493474  228825 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:11:34.493515  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:34.493570  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-169816 minikube.k8s.io/updated_at=2025_11_09T14_11_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=old-k8s-version-169816 minikube.k8s.io/primary=true
	I1109 14:11:34.504773  228825 ops.go:34] apiserver oom_adj: -16
	I1109 14:11:34.570804  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:35.071820  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:35.571411  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:36.072071  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:36.571093  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:37.070967  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:37.571461  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:38.071479  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:38.571472  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:39.071560  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:35.622706  234584 cli_runner.go:164] Run: docker network inspect no-preload-152932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:11:35.643921  234584 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1109 14:11:35.648565  234584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:11:35.659445  234584 kubeadm.go:884] updating cluster {Name:no-preload-152932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-152932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:11:35.659568  234584 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:11:35.659609  234584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:11:35.686817  234584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1109 14:11:35.686841  234584 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1109 14:11:35.686906  234584 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:11:35.686912  234584 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:35.686939  234584 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1109 14:11:35.686962  234584 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:35.686972  234584 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:35.686984  234584 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:35.686968  234584 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:35.687141  234584 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:35.688187  234584 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:35.688233  234584 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:35.688266  234584 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:35.688302  234584 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:35.688357  234584 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1109 14:11:35.688356  234584 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:11:35.688374  234584 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:35.688766  234584 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:35.812532  234584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:35.815256  234584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:35.818338  234584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:35.823828  234584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:35.826497  234584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:35.845526  234584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1109 14:11:35.853392  234584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:35.853605  234584 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1109 14:11:35.853667  234584 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:35.853742  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.854894  234584 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1109 14:11:35.854934  234584 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:35.854971  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.856903  234584 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1109 14:11:35.856945  234584 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:35.856979  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.868115  234584 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1109 14:11:35.868152  234584 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:35.868190  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.868346  234584 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1109 14:11:35.868376  234584 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:35.868421  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.883286  234584 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1109 14:11:35.883318  234584 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1109 14:11:35.883357  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.889446  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:35.889465  234584 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1109 14:11:35.889486  234584 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:35.889516  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:35.889545  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:35.889558  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:35.889620  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:35.889720  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1109 14:11:35.889720  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:35.894607  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:35.925971  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:35.926772  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:35.951979  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:11:35.951991  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:35.952042  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:35.952085  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1109 14:11:35.952124  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:35.952143  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:35.952191  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:11:35.987374  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1109 14:11:35.987612  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:11:36.012345  234584 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1109 14:11:36.012440  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1109 14:11:36.012455  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:11:36.012464  234584 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1109 14:11:36.012508  234584 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1109 14:11:36.012519  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:11:36.012562  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1109 14:11:36.012578  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1109 14:11:36.012579  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1109 14:11:36.012624  234584 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1109 14:11:36.012707  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1109 14:11:36.018319  234584 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1109 14:11:36.018346  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1109 14:11:36.018430  234584 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1109 14:11:36.018448  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1109 14:11:36.059269  234584 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1109 14:11:36.059260  234584 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1109 14:11:36.059381  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1109 14:11:36.059376  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1109 14:11:36.065714  234584 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1109 14:11:36.065723  234584 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1109 14:11:36.065770  234584 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1109 14:11:36.065800  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1109 14:11:36.065805  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1109 14:11:36.065804  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1109 14:11:36.161731  234584 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1109 14:11:36.161765  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1109 14:11:36.169598  234584 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1109 14:11:36.169627  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1109 14:11:36.169654  234584 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1109 14:11:36.169682  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
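(The existence check/scp pairs above follow one pattern: stat the image tarball on the node, and only when the stat fails with "No such file or directory" copy the cached tarball over. Below is a hedged local sketch of that decision, not minikube's cache_images.go: it runs on the node itself rather than over SSH, and both paths in main are invented for illustration.)

// ensure_tarball_sketch.go - stat-then-copy pattern from the log above.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func ensureImageTarball(cached, target string) error {
	// Equivalent of: stat -c "%s %y" /var/lib/minikube/images/<image>
	if err := exec.Command("stat", "-c", "%s %y", target).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// Equivalent of the scp step in the log: copy the cached tarball in.
	src, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.Create(target)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	err := ensureImageTarball("/tmp/cache/etcd_3.6.4-0", "/tmp/images/etcd_3.6.4-0")
	fmt.Println("transfer result:", err)
}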
	I1109 14:11:36.188341  234584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:11:36.243826  234584 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1109 14:11:36.243982  234584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1109 14:11:36.263469  234584 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1109 14:11:36.263507  234584 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:11:36.263555  234584 ssh_runner.go:195] Run: which crictl
	I1109 14:11:36.829801  234584 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1109 14:11:36.829841  234584 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1109 14:11:36.829887  234584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1109 14:11:36.829901  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:11:37.834441  234584 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.004526493s)
	I1109 14:11:37.834474  234584 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1109 14:11:37.834503  234584 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1109 14:11:37.834509  234584 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.00458255s)
	I1109 14:11:37.834555  234584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1109 14:11:37.834575  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:11:39.114241  234584 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.279657482s)
	I1109 14:11:39.114273  234584 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1109 14:11:39.114276  234584 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.279686299s)
	I1109 14:11:39.114313  234584 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1109 14:11:39.114333  234584 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:11:39.114369  234584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1109 14:11:40.166310  234584 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.051916592s)
	I1109 14:11:40.166337  234584 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1109 14:11:40.166367  234584 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1109 14:11:40.166381  234584 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.052025653s)
	I1109 14:11:40.166413  234584 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1109 14:11:40.166421  234584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1109 14:11:40.166487  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1109 14:11:41.487260  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:41.487710  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:41.487772  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:41.487832  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:41.515713  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:41.515753  188127 cri.go:89] found id: ""
	I1109 14:11:41.515769  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:41.515823  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:41.519772  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:41.519834  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:41.547407  188127 cri.go:89] found id: ""
	I1109 14:11:41.547429  188127 logs.go:282] 0 containers: []
	W1109 14:11:41.547443  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:41.547451  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:41.547501  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:41.573150  188127 cri.go:89] found id: ""
	I1109 14:11:41.573175  188127 logs.go:282] 0 containers: []
	W1109 14:11:41.573185  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:41.573196  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:41.573254  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:41.598617  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:41.598635  188127 cri.go:89] found id: ""
	I1109 14:11:41.598670  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:41.598721  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:41.602508  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:41.602561  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:41.627815  188127 cri.go:89] found id: ""
	I1109 14:11:41.627852  188127 logs.go:282] 0 containers: []
	W1109 14:11:41.627863  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:41.627879  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:41.627932  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:41.658699  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:41.658717  188127 cri.go:89] found id: ""
	I1109 14:11:41.658724  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:41.658777  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:41.662508  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:41.662557  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:41.687142  188127 cri.go:89] found id: ""
	I1109 14:11:41.687166  188127 logs.go:282] 0 containers: []
	W1109 14:11:41.687174  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:41.687181  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:41.687234  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:41.712656  188127 cri.go:89] found id: ""
	I1109 14:11:41.712683  188127 logs.go:282] 0 containers: []
	W1109 14:11:41.712693  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:41.712705  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:41.712718  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:41.807580  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:41.807609  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:41.821819  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:41.821848  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 14:11:41.874827  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 14:11:41.874848  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:41.874871  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:41.907848  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:41.907875  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:41.964380  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:41.964412  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:41.992226  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:41.992252  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:42.042202  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:42.042232  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:39.571778  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:40.071317  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:40.571077  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:41.071170  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:41.571839  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:42.071863  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:42.571461  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:43.071404  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:43.571895  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:44.071518  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:41.447343  234584 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.280902209s)
	I1109 14:11:41.447367  234584 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1109 14:11:41.447372  234584 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.280861547s)
	I1109 14:11:41.447383  234584 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1109 14:11:41.447399  234584 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1109 14:11:41.447422  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1109 14:11:41.447430  234584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1109 14:11:42.676960  234584 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.229498483s)
	I1109 14:11:42.676988  234584 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1109 14:11:42.677018  234584 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1109 14:11:42.677062  234584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1109 14:11:44.571238  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:45.071981  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:45.571539  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:46.071800  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:46.571577  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:47.071414  228825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:47.152018  228825 kubeadm.go:1114] duration metric: took 12.658545852s to wait for elevateKubeSystemPrivileges
	I1109 14:11:47.152052  228825 kubeadm.go:403] duration metric: took 23.361721588s to StartCluster
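(The burst of `kubectl get sa default` commands above is the wait that "elevateKubeSystemPrivileges" reports as taking 12.658545852s: the same command is retried on a fixed cadence until the default service account exists. A rough Go sketch of that wait loop follows; the 500ms interval is an approximation read off the timestamps, and the helper name is invented.)

// wait_default_sa_sketch.go - poll `kubectl get sa default` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account never appeared within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		time.Minute,
	)
	fmt.Println(err)
}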
	I1109 14:11:47.152073  228825 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:47.152151  228825 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:11:47.153555  228825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:47.155769  228825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:11:47.155808  228825 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:11:47.155858  228825 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:11:47.155978  228825 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-169816"
	I1109 14:11:47.155998  228825 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-169816"
	I1109 14:11:47.156028  228825 host.go:66] Checking if "old-k8s-version-169816" exists ...
	I1109 14:11:47.156052  228825 config.go:182] Loaded profile config "old-k8s-version-169816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:11:47.156104  228825 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-169816"
	I1109 14:11:47.156126  228825 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-169816"
	I1109 14:11:47.156465  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:47.156653  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:47.192232  228825 out.go:179] * Verifying Kubernetes components...
	I1109 14:11:47.192469  228825 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-169816"
	I1109 14:11:47.192779  228825 host.go:66] Checking if "old-k8s-version-169816" exists ...
	I1109 14:11:47.193006  228825 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:11:47.193277  228825 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:11:47.194152  228825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:47.194195  228825 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:11:47.194217  228825 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:11:47.194271  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:47.215543  228825 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:11:47.215597  228825 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:11:47.215973  228825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:11:47.220387  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:47.251743  228825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:11:47.256561  228825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:11:47.337474  228825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:11:47.352459  228825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:11:47.365493  228825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:11:47.499912  228825 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1109 14:11:47.501134  228825 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-169816" to be "Ready" ...
	I1109 14:11:47.805570  228825 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
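(The sed pipeline at 14:11:47.256561, which precedes the "host record injected into CoreDNS's ConfigMap" line above, splices a `hosts { ... }` stanza for host.minikube.internal into the CoreDNS Corefile just before its `forward . /etc/resolv.conf` line. The Go snippet below is only an illustrative equivalent of that string edit, not minikube's code, and the sample Corefile in main is a simplified stand-in.)

// inject_host_record_sketch.go - insert a hosts block ahead of the forward plugin.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // same placement as the sed `/i` command
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}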
	I1109 14:11:44.575599  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:11:44.576045  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1109 14:11:44.576101  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:11:44.576156  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:11:44.608435  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:44.608453  188127 cri.go:89] found id: ""
	I1109 14:11:44.608460  188127 logs.go:282] 1 containers: [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:11:44.608504  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:44.612831  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:11:44.612893  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:11:44.641732  188127 cri.go:89] found id: ""
	I1109 14:11:44.641756  188127 logs.go:282] 0 containers: []
	W1109 14:11:44.641767  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:11:44.641774  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:11:44.641823  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:11:44.672898  188127 cri.go:89] found id: ""
	I1109 14:11:44.672921  188127 logs.go:282] 0 containers: []
	W1109 14:11:44.672930  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:11:44.672938  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:11:44.672991  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:11:44.699930  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:44.699952  188127 cri.go:89] found id: ""
	I1109 14:11:44.699961  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:11:44.700017  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:44.704451  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:11:44.704513  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:11:44.733143  188127 cri.go:89] found id: ""
	I1109 14:11:44.733166  188127 logs.go:282] 0 containers: []
	W1109 14:11:44.733176  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:11:44.733188  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:11:44.733243  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:11:44.764131  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:44.764154  188127 cri.go:89] found id: ""
	I1109 14:11:44.764167  188127 logs.go:282] 1 containers: [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:11:44.764220  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:11:44.768064  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:11:44.768131  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:11:44.797706  188127 cri.go:89] found id: ""
	I1109 14:11:44.797731  188127 logs.go:282] 0 containers: []
	W1109 14:11:44.797742  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:11:44.797749  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:11:44.797803  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:11:44.826771  188127 cri.go:89] found id: ""
	I1109 14:11:44.826799  188127 logs.go:282] 0 containers: []
	W1109 14:11:44.826809  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:11:44.826820  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:11:44.826832  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 14:11:44.877579  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:11:44.877609  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:11:44.908448  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:11:44.908479  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:11:45.035865  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:11:45.035902  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:11:45.052885  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:11:45.052921  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1109 14:11:46.043334  234584 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.366247326s)
	I1109 14:11:46.043360  234584 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1109 14:11:46.043381  234584 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1109 14:11:46.043416  234584 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1109 14:11:46.589055  234584 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-5854/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1109 14:11:46.589096  234584 cache_images.go:125] Successfully loaded all cached images
	I1109 14:11:46.589101  234584 cache_images.go:94] duration metric: took 10.902246565s to LoadCachedImages
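(The LoadCachedImages phase that just completed in 10.902246565s ends with each transferred tarball being handed to `podman load -i`, as the "Completed: sudo podman load ..." lines above show. The sketch below mirrors only that final load-and-time step; the directory path, the lack of sudo, and the loop shape are simplifications, not minikube's cache_images.go.)

// load_cached_images_sketch.go - feed every tarball in a directory to podman load.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"time"
)

func loadCachedImages(dir string) error {
	tarballs, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		return err
	}
	for _, tb := range tarballs {
		start := time.Now()
		if out, err := exec.Command("podman", "load", "-i", tb).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load -i %s: %v\n%s", tb, err, out)
		}
		fmt.Printf("loaded %s in %s\n", tb, time.Since(start)) // like the "Completed: ..." duration lines
	}
	return nil
}

func main() {
	if err := loadCachedImages("/var/lib/minikube/images"); err != nil {
		fmt.Println(err)
	}
}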
	I1109 14:11:46.589112  234584 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1109 14:11:46.589202  234584 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-152932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-152932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
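The ExecStart override shown above is delivered as a systemd drop-in (the 368-byte 10-kubeadm.conf written later in this log). A minimal sketch that renders a similar drop-in with Go's text/template; the field names are hypothetical, and the flags simply mirror the log.

// Sketch only: render a kubelet systemd drop-in like the one above with
// text/template. NodeName, NodeIP, and KubeletPath are hypothetical names.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"NodeName":    "no-preload-152932",
		"NodeIP":      "192.168.103.2",
	})
}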
	I1109 14:11:46.589261  234584 ssh_runner.go:195] Run: crio config
	I1109 14:11:46.639207  234584 cni.go:84] Creating CNI manager for ""
	I1109 14:11:46.639235  234584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:46.639250  234584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:11:46.639271  234584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-152932 NodeName:no-preload-152932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:11:46.639396  234584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-152932"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
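The kubelet stanza in the generated config above deliberately disables image GC and disk eviction (imageGCHighThresholdPercent at 100, all evictionHard thresholds at 0%). A sketch of emitting those same settings from Go, assuming gopkg.in/yaml.v3 is available; the struct is a minimal stand-in, not the real KubeletConfiguration type.

// Sketch only, assuming gopkg.in/yaml.v3: marshal the "disable disk
// eviction" kubelet settings shown in the generated config above.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	APIVersion                  string            `yaml:"apiVersion"`
	Kind                        string            `yaml:"kind"`
	CgroupDriver                string            `yaml:"cgroupDriver"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:                  "kubelet.config.k8s.io/v1beta1",
		Kind:                        "KubeletConfiguration",
		CgroupDriver:                "systemd",
		ImageGCHighThresholdPercent: 100,
		EvictionHard: map[string]string{
			"nodefs.available":  "0%",
			"nodefs.inodesFree": "0%",
			"imagefs.available": "0%",
		},
		FailSwapOn: false,
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}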
	I1109 14:11:46.639457  234584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:11:46.648766  234584 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1109 14:11:46.648818  234584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1109 14:11:46.656750  234584 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1109 14:11:46.656823  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1109 14:11:46.656848  234584 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1109 14:11:46.656872  234584 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21139-5854/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1109 14:11:46.660983  234584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1109 14:11:46.661007  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1109 14:11:47.512938  234584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:11:47.531961  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1109 14:11:47.537076  234584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1109 14:11:47.537104  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1109 14:11:47.605848  234584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1109 14:11:47.618285  234584 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1109 14:11:47.618319  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1109 14:11:47.875795  234584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:11:47.884391  234584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1109 14:11:47.896659  234584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:11:47.911062  234584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1109 14:11:47.923922  234584 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:11:47.928011  234584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:11:47.938072  234584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:11:48.028438  234584 ssh_runner.go:195] Run: sudo systemctl start kubelet
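The kubectl/kubelet/kubeadm transfers above follow a stat-then-copy pattern: probe the target path and only transfer the cached binary when the probe fails. A local sketch of that pattern follows; ensureBinary and its arguments are hypothetical, and the real tool performs the probe and copy over SSH.

// Sketch only: "check with stat, copy if missing" for the kubernetes
// binaries, applied to local paths instead of a remote node.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func ensureBinary(cached, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // already present, skip the transfer
	}
	if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
		return err
	}
	src, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.34.1")
	for _, bin := range []string{"kubectl", "kubelet", "kubeadm"} {
		err := ensureBinary(filepath.Join(cacheDir, bin), filepath.Join("/var/lib/minikube/binaries/v1.34.1", bin))
		fmt.Println(bin, err)
	}
}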
	I1109 14:11:48.052369  234584 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932 for IP: 192.168.103.2
	I1109 14:11:48.052393  234584 certs.go:195] generating shared ca certs ...
	I1109 14:11:48.052415  234584 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:48.052577  234584 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:11:48.052671  234584 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:11:48.052692  234584 certs.go:257] generating profile certs ...
	I1109 14:11:48.052765  234584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/client.key
	I1109 14:11:48.052779  234584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/client.crt with IP's: []
	I1109 14:11:48.121344  234584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/client.crt ...
	I1109 14:11:48.121368  234584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/client.crt: {Name:mkd264edba54b149fb562434c2c4233a9590c390 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:48.121513  234584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/client.key ...
	I1109 14:11:48.121526  234584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/client.key: {Name:mk0e883a03aae85e51e934a8b4e9f4b099430b80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:48.121596  234584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.key.9a768455
	I1109 14:11:48.121621  234584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.crt.9a768455 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1109 14:11:48.309774  234584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.crt.9a768455 ...
	I1109 14:11:48.309801  234584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.crt.9a768455: {Name:mke3b41b990f379165b36c42d6822df0a64bacb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:48.309945  234584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.key.9a768455 ...
	I1109 14:11:48.309958  234584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.key.9a768455: {Name:mk1bb0ced0bb3d8c41f0807a7ee680ce00cd49a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:48.310029  234584 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.crt.9a768455 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.crt
	I1109 14:11:48.310101  234584 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.key.9a768455 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.key
	I1109 14:11:48.310156  234584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/proxy-client.key
	I1109 14:11:48.310174  234584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/proxy-client.crt with IP's: []
	I1109 14:11:48.538148  234584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/proxy-client.crt ...
	I1109 14:11:48.538173  234584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/proxy-client.crt: {Name:mk2f25174c21f9ee6a2c46408e9529da717154e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:48.538345  234584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/proxy-client.key ...
	I1109 14:11:48.538362  234584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/proxy-client.key: {Name:mk51b248fcb7d1cabc1fc6df64bb8618526b208b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:11:48.538585  234584 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:11:48.538627  234584 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:11:48.538656  234584 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:11:48.538698  234584 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:11:48.538735  234584 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:11:48.538767  234584 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:11:48.538821  234584 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:11:48.539401  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:11:48.557776  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:11:48.575007  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:11:48.592157  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:11:48.610069  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:11:48.626983  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1109 14:11:48.644337  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:11:48.661137  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:11:48.680209  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:11:48.704923  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:11:48.728224  234584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:11:48.755977  234584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
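The apiserver certificate generated above carries the service VIP, loopback, a cluster address, and the node IP as SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2). Below is a self-contained sketch that produces a certificate with the same IP SANs; it self-signs to stay short, whereas the log shows the profile cert being signed by the minikube CA.

// Sketch only: a self-signed server certificate with the IP SANs listed in
// the log above. Real deployments sign with the cluster CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}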
	I1109 14:11:48.775042  234584 ssh_runner.go:195] Run: openssl version
	I1109 14:11:48.783892  234584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:11:48.795864  234584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:11:48.801359  234584 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:11:48.801410  234584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:11:48.860415  234584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:11:48.872997  234584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:11:48.884838  234584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:48.890182  234584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:48.890237  234584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:11:48.945080  234584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:11:48.957157  234584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:11:48.969463  234584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:11:48.974966  234584 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:11:48.975040  234584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:11:49.033939  234584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
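Each CA file above is installed by hashing it with openssl and symlinking it into /etc/ssl/certs under <hash>.0, which is how OpenSSL locates trusted certificates by subject hash. A sketch of that hash-then-symlink step for one PEM file (requires root for /etc/ssl/certs; the paths come from the log).

// Sketch only: reproduce the "openssl x509 -hash" plus "ln -fs" step above
// for a single PEM file.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of ln -fs: drop any existing link, then create a fresh one.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}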
	I1109 14:11:49.047148  234584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:11:49.052457  234584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:11:49.052522  234584 kubeadm.go:401] StartCluster: {Name:no-preload-152932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-152932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:11:49.052603  234584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:11:49.052667  234584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:11:49.091660  234584 cri.go:89] found id: ""
	I1109 14:11:49.091740  234584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:11:49.105791  234584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:11:49.116950  234584 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:11:49.117006  234584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:11:49.127551  234584 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:11:49.127573  234584 kubeadm.go:158] found existing configuration files:
	
	I1109 14:11:49.127625  234584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:11:49.139503  234584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:11:49.139559  234584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:11:49.148054  234584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:11:49.158782  234584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:11:49.158841  234584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:11:49.168742  234584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:11:49.178980  234584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:11:49.179029  234584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:11:49.190248  234584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:11:49.200083  234584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:11:49.200141  234584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
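The grep/rm sequence above removes any kubeconfig that does not reference the expected control-plane endpoint so kubeadm init can regenerate it from scratch. A local sketch of the same cleanup; the endpoint and file list come from the log, everything else is illustrative.

// Sketch only: drop kubeconfig files that are missing or do not point at the
// expected control-plane endpoint, mirroring the grep/rm loop above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, path := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			fmt.Println("keeping", path)
			continue
		}
		// Missing or stale: remove it (errors for already-absent files are ignored).
		_ = os.Remove(path)
		fmt.Println("removed (or absent)", path)
	}
}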
	I1109 14:11:49.209559  234584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:11:49.250139  234584 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:11:49.250247  234584 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:11:49.275092  234584 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:11:49.275187  234584 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:11:49.275244  234584 kubeadm.go:319] OS: Linux
	I1109 14:11:49.275313  234584 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:11:49.275419  234584 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:11:49.275542  234584 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:11:49.275631  234584 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:11:49.275758  234584 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:11:49.275843  234584 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:11:49.275932  234584 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:11:49.275991  234584 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:11:49.351584  234584 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:11:49.351754  234584 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:11:49.351938  234584 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:11:49.368068  234584 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:11:47.806567  228825 addons.go:515] duration metric: took 650.702714ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:11:48.004077  228825 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-169816" context rescaled to 1 replicas
	I1109 14:11:49.370764  234584 out.go:252]   - Generating certificates and keys ...
	I1109 14:11:49.370878  234584 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:11:49.370984  234584 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:11:49.960463  234584 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:11:50.180146  234584 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:11:50.298504  234584 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:11:50.648274  234584 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:11:50.868357  234584 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:11:50.868541  234584 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-152932] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1109 14:11:51.199350  234584 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:11:51.199492  234584 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-152932] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1109 14:11:51.436726  234584 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:11:51.629766  234584 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:11:51.706041  234584 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:11:51.706122  234584 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:11:52.000967  234584 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:11:52.442104  234584 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:11:52.778396  234584 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:11:53.227833  234584 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:11:53.478793  234584 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:11:53.479395  234584 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:11:53.510577  234584 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1109 14:11:49.505250  228825 node_ready.go:57] node "old-k8s-version-169816" has "Ready":"False" status (will retry)
	W1109 14:11:52.004571  228825 node_ready.go:57] node "old-k8s-version-169816" has "Ready":"False" status (will retry)
	I1109 14:11:53.512689  234584 out.go:252]   - Booting up control plane ...
	I1109 14:11:53.512844  234584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:11:53.512966  234584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:11:53.513614  234584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:11:53.526779  234584 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:11:53.526933  234584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:11:53.533020  234584 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:11:53.533278  234584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:11:53.533344  234584 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:11:53.623474  234584 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:11:53.623603  234584 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:11:54.624152  234584 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000729311s
	I1109 14:11:54.626902  234584 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:11:54.627057  234584 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1109 14:11:54.627189  234584 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:11:54.627300  234584 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:11:55.844922  234584 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.217896596s
	I1109 14:11:56.449876  234584 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.822928297s
	I1109 14:11:58.128205  234584 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501201976s
	I1109 14:11:58.140478  234584 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:11:58.149671  234584 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:11:58.157278  234584 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:11:58.157559  234584 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-152932 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:11:58.164707  234584 kubeadm.go:319] [bootstrap-token] Using token: qbggc9.1d7fuwjnbuoxpfc7
	I1109 14:11:55.129332  188127 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.076386742s)
	W1109 14:11:55.129373  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1109 14:11:55.129382  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:11:55.129396  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:11:55.170254  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:11:55.170288  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:11:55.239240  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:11:55.239276  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:11:57.769209  188127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1109 14:11:54.504323  228825 node_ready.go:57] node "old-k8s-version-169816" has "Ready":"False" status (will retry)
	W1109 14:11:56.505584  228825 node_ready.go:57] node "old-k8s-version-169816" has "Ready":"False" status (will retry)
	W1109 14:11:59.004781  228825 node_ready.go:57] node "old-k8s-version-169816" has "Ready":"False" status (will retry)
	I1109 14:11:58.165830  234584 out.go:252]   - Configuring RBAC rules ...
	I1109 14:11:58.165975  234584 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:11:58.170347  234584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:11:58.174764  234584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:11:58.177117  234584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:11:58.179355  234584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:11:58.181462  234584 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:11:58.533729  234584 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:11:58.958507  234584 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:11:59.533787  234584 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:11:59.534797  234584 kubeadm.go:319] 
	I1109 14:11:59.534859  234584 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:11:59.534883  234584 kubeadm.go:319] 
	I1109 14:11:59.534979  234584 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:11:59.534992  234584 kubeadm.go:319] 
	I1109 14:11:59.535018  234584 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:11:59.535149  234584 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:11:59.535241  234584 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:11:59.535251  234584 kubeadm.go:319] 
	I1109 14:11:59.535329  234584 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:11:59.535357  234584 kubeadm.go:319] 
	I1109 14:11:59.535437  234584 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:11:59.535446  234584 kubeadm.go:319] 
	I1109 14:11:59.535540  234584 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:11:59.535685  234584 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:11:59.535779  234584 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:11:59.535788  234584 kubeadm.go:319] 
	I1109 14:11:59.535915  234584 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:11:59.536027  234584 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:11:59.536038  234584 kubeadm.go:319] 
	I1109 14:11:59.536150  234584 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qbggc9.1d7fuwjnbuoxpfc7 \
	I1109 14:11:59.536317  234584 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:11:59.536355  234584 kubeadm.go:319] 	--control-plane 
	I1109 14:11:59.536371  234584 kubeadm.go:319] 
	I1109 14:11:59.536489  234584 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:11:59.536498  234584 kubeadm.go:319] 
	I1109 14:11:59.536607  234584 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qbggc9.1d7fuwjnbuoxpfc7 \
	I1109 14:11:59.536765  234584 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 14:11:59.538752  234584 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:11:59.538948  234584 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
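The --discovery-token-ca-cert-hash printed in the join command above is, per kubeadm's documentation, a SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes such a hash from a CA certificate; the ca.crt path here is an assumption.

// Sketch only: recompute a kubeadm discovery-token-ca-cert-hash as
// sha256 over the CA certificate's Subject Public Key Info.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}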
	I1109 14:11:59.538986  234584 cni.go:84] Creating CNI manager for ""
	I1109 14:11:59.538999  234584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:11:59.540565  234584 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:11:59.541569  234584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:11:59.546338  234584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:11:59.546352  234584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:11:59.559315  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:11:59.758009  234584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:11:59.758084  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:11:59.758115  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-152932 minikube.k8s.io/updated_at=2025_11_09T14_11_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=no-preload-152932 minikube.k8s.io/primary=true
	I1109 14:11:59.768740  234584 ops.go:34] apiserver oom_adj: -16
	I1109 14:11:59.842519  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:00.342860  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:00.503962  228825 node_ready.go:49] node "old-k8s-version-169816" is "Ready"
	I1109 14:12:00.503996  228825 node_ready.go:38] duration metric: took 13.002826833s for node "old-k8s-version-169816" to be "Ready" ...
	I1109 14:12:00.504013  228825 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:12:00.504069  228825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:12:00.516727  228825 api_server.go:72] duration metric: took 13.3608819s to wait for apiserver process to appear ...
	I1109 14:12:00.516749  228825 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:12:00.516769  228825 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:12:00.520718  228825 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:12:00.521766  228825 api_server.go:141] control plane version: v1.28.0
	I1109 14:12:00.521791  228825 api_server.go:131] duration metric: took 5.036492ms to wait for apiserver health ...
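The healthz probe above is a plain HTTPS GET against the apiserver. A minimal sketch follows; it skips TLS verification to stay self-contained, whereas the real check presumably validates against the cluster CA, and the URL is the one from the log.

// Sketch only: probe the apiserver /healthz endpoint with a short timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// InsecureSkipVerify keeps the sketch self-contained; do not do
			// this when the cluster CA is available.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
}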
	I1109 14:12:00.521799  228825 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:12:00.524997  228825 system_pods.go:59] 8 kube-system pods found
	I1109 14:12:00.525033  228825 system_pods.go:61] "coredns-5dd5756b68-5bgfs" [902d0a81-0e13-4f1b-a2cb-b00a65fcf0cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:12:00.525042  228825 system_pods.go:61] "etcd-old-k8s-version-169816" [e5e8ea55-ad4e-47ae-9667-ff952abbed41] Running
	I1109 14:12:00.525054  228825 system_pods.go:61] "kindnet-mjzvm" [e7039a8f-f22c-4c68-b80d-ae4be93c9336] Running
	I1109 14:12:00.525060  228825 system_pods.go:61] "kube-apiserver-old-k8s-version-169816" [7f015dac-49e5-4b2a-a274-6d6f1eab8d4e] Running
	I1109 14:12:00.525068  228825 system_pods.go:61] "kube-controller-manager-old-k8s-version-169816" [af76f5d9-9201-489f-8c72-da3c1a21d073] Running
	I1109 14:12:00.525072  228825 system_pods.go:61] "kube-proxy-96xbm" [75cd36ca-24dd-42c7-832f-af68236aa60b] Running
	I1109 14:12:00.525077  228825 system_pods.go:61] "kube-scheduler-old-k8s-version-169816" [49f73b14-886b-428c-bfcc-e0374b13fc1d] Running
	I1109 14:12:00.525085  228825 system_pods.go:61] "storage-provisioner" [3a49fdf1-199e-4d41-978b-a0fb1b33155b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:12:00.525093  228825 system_pods.go:74] duration metric: took 3.28694ms to wait for pod list to return data ...
	I1109 14:12:00.525105  228825 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:12:00.526999  228825 default_sa.go:45] found service account: "default"
	I1109 14:12:00.527015  228825 default_sa.go:55] duration metric: took 1.90455ms for default service account to be created ...
	I1109 14:12:00.527021  228825 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:12:00.529832  228825 system_pods.go:86] 8 kube-system pods found
	I1109 14:12:00.529862  228825 system_pods.go:89] "coredns-5dd5756b68-5bgfs" [902d0a81-0e13-4f1b-a2cb-b00a65fcf0cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:12:00.529868  228825 system_pods.go:89] "etcd-old-k8s-version-169816" [e5e8ea55-ad4e-47ae-9667-ff952abbed41] Running
	I1109 14:12:00.529875  228825 system_pods.go:89] "kindnet-mjzvm" [e7039a8f-f22c-4c68-b80d-ae4be93c9336] Running
	I1109 14:12:00.529885  228825 system_pods.go:89] "kube-apiserver-old-k8s-version-169816" [7f015dac-49e5-4b2a-a274-6d6f1eab8d4e] Running
	I1109 14:12:00.529894  228825 system_pods.go:89] "kube-controller-manager-old-k8s-version-169816" [af76f5d9-9201-489f-8c72-da3c1a21d073] Running
	I1109 14:12:00.529901  228825 system_pods.go:89] "kube-proxy-96xbm" [75cd36ca-24dd-42c7-832f-af68236aa60b] Running
	I1109 14:12:00.529906  228825 system_pods.go:89] "kube-scheduler-old-k8s-version-169816" [49f73b14-886b-428c-bfcc-e0374b13fc1d] Running
	I1109 14:12:00.529916  228825 system_pods.go:89] "storage-provisioner" [3a49fdf1-199e-4d41-978b-a0fb1b33155b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:12:00.529955  228825 retry.go:31] will retry after 304.891709ms: missing components: kube-dns
	I1109 14:12:00.841454  228825 system_pods.go:86] 8 kube-system pods found
	I1109 14:12:00.841493  228825 system_pods.go:89] "coredns-5dd5756b68-5bgfs" [902d0a81-0e13-4f1b-a2cb-b00a65fcf0cf] Running
	I1109 14:12:00.841503  228825 system_pods.go:89] "etcd-old-k8s-version-169816" [e5e8ea55-ad4e-47ae-9667-ff952abbed41] Running
	I1109 14:12:00.841508  228825 system_pods.go:89] "kindnet-mjzvm" [e7039a8f-f22c-4c68-b80d-ae4be93c9336] Running
	I1109 14:12:00.841513  228825 system_pods.go:89] "kube-apiserver-old-k8s-version-169816" [7f015dac-49e5-4b2a-a274-6d6f1eab8d4e] Running
	I1109 14:12:00.841520  228825 system_pods.go:89] "kube-controller-manager-old-k8s-version-169816" [af76f5d9-9201-489f-8c72-da3c1a21d073] Running
	I1109 14:12:00.841525  228825 system_pods.go:89] "kube-proxy-96xbm" [75cd36ca-24dd-42c7-832f-af68236aa60b] Running
	I1109 14:12:00.841530  228825 system_pods.go:89] "kube-scheduler-old-k8s-version-169816" [49f73b14-886b-428c-bfcc-e0374b13fc1d] Running
	I1109 14:12:00.841535  228825 system_pods.go:89] "storage-provisioner" [3a49fdf1-199e-4d41-978b-a0fb1b33155b] Running
	I1109 14:12:00.841544  228825 system_pods.go:126] duration metric: took 314.517027ms to wait for k8s-apps to be running ...
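The k8s-apps wait above is a poll-with-retry loop: list the kube-system pods, report what is still missing (kube-dns in this run), sleep for a short interval, and retry until everything is Running or a timeout expires. A generic sketch of that loop; waitFor and the simulated check are hypothetical.

// Sketch only: a poll-until-healthy helper in the spirit of the retry loop
// above. The check function here just simulates kube-dns becoming Ready on
// the second poll.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	ready := false
	err := waitFor(5*time.Second, 300*time.Millisecond, func() error {
		if !ready {
			ready = true // simulate the missing component appearing next poll
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("done:", err)
}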
	I1109 14:12:00.841553  228825 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:12:00.841607  228825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:12:00.862903  228825 system_svc.go:56] duration metric: took 21.323576ms WaitForService to wait for kubelet
	I1109 14:12:00.862934  228825 kubeadm.go:587] duration metric: took 13.707092465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:12:00.862954  228825 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:12:00.865324  228825 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:12:00.865348  228825 node_conditions.go:123] node cpu capacity is 8
	I1109 14:12:00.865366  228825 node_conditions.go:105] duration metric: took 2.406699ms to run NodePressure ...
	I1109 14:12:00.865381  228825 start.go:242] waiting for startup goroutines ...
	I1109 14:12:00.865394  228825 start.go:247] waiting for cluster config update ...
	I1109 14:12:00.865411  228825 start.go:256] writing updated cluster config ...
	I1109 14:12:00.865715  228825 ssh_runner.go:195] Run: rm -f paused
	I1109 14:12:00.869977  228825 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:12:00.873722  228825 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5bgfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:00.877564  228825 pod_ready.go:94] pod "coredns-5dd5756b68-5bgfs" is "Ready"
	I1109 14:12:00.877586  228825 pod_ready.go:86] duration metric: took 3.843338ms for pod "coredns-5dd5756b68-5bgfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:00.880246  228825 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:00.884114  228825 pod_ready.go:94] pod "etcd-old-k8s-version-169816" is "Ready"
	I1109 14:12:00.884140  228825 pod_ready.go:86] duration metric: took 3.875254ms for pod "etcd-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:00.886587  228825 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:00.890601  228825 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-169816" is "Ready"
	I1109 14:12:00.890622  228825 pod_ready.go:86] duration metric: took 4.016995ms for pod "kube-apiserver-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:00.892938  228825 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:01.273458  228825 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-169816" is "Ready"
	I1109 14:12:01.273483  228825 pod_ready.go:86] duration metric: took 380.525192ms for pod "kube-controller-manager-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:01.474929  228825 pod_ready.go:83] waiting for pod "kube-proxy-96xbm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:01.873455  228825 pod_ready.go:94] pod "kube-proxy-96xbm" is "Ready"
	I1109 14:12:01.873480  228825 pod_ready.go:86] duration metric: took 398.527068ms for pod "kube-proxy-96xbm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:02.075136  228825 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:02.473394  228825 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-169816" is "Ready"
	I1109 14:12:02.473417  228825 pod_ready.go:86] duration metric: took 398.254276ms for pod "kube-scheduler-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:12:02.473428  228825 pod_ready.go:40] duration metric: took 1.60342312s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:12:02.517183  228825 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1109 14:12:02.518632  228825 out.go:203] 
	W1109 14:12:02.519583  228825 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1109 14:12:02.520658  228825 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1109 14:12:02.521932  228825 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-169816" cluster and "default" namespace by default
	I1109 14:12:02.770142  188127 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 14:12:02.770240  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 14:12:02.770301  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 14:12:02.798474  188127 cri.go:89] found id: "51bd0a407228f6671ff6e04c3a75c7f2da7ad5d676a2ba58130c76caf6b7b06d"
	I1109 14:12:02.798494  188127 cri.go:89] found id: "278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	I1109 14:12:02.798499  188127 cri.go:89] found id: ""
	I1109 14:12:02.798507  188127 logs.go:282] 2 containers: [51bd0a407228f6671ff6e04c3a75c7f2da7ad5d676a2ba58130c76caf6b7b06d 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]
	I1109 14:12:02.798561  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:12:02.802438  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:12:02.806229  188127 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 14:12:02.806293  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 14:12:02.831899  188127 cri.go:89] found id: ""
	I1109 14:12:02.831919  188127 logs.go:282] 0 containers: []
	W1109 14:12:02.831926  188127 logs.go:284] No container was found matching "etcd"
	I1109 14:12:02.831931  188127 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 14:12:02.831978  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 14:12:02.860200  188127 cri.go:89] found id: ""
	I1109 14:12:02.860222  188127 logs.go:282] 0 containers: []
	W1109 14:12:02.860231  188127 logs.go:284] No container was found matching "coredns"
	I1109 14:12:02.860238  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 14:12:02.860303  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 14:12:02.887418  188127 cri.go:89] found id: "5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:12:02.887442  188127 cri.go:89] found id: ""
	I1109 14:12:02.887451  188127 logs.go:282] 1 containers: [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82]
	I1109 14:12:02.887499  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:12:02.891337  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 14:12:02.891412  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 14:12:02.918793  188127 cri.go:89] found id: ""
	I1109 14:12:02.918815  188127 logs.go:282] 0 containers: []
	W1109 14:12:02.918824  188127 logs.go:284] No container was found matching "kube-proxy"
	I1109 14:12:02.918831  188127 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 14:12:02.918879  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 14:12:02.945588  188127 cri.go:89] found id: "4a0e4703bcafc0efbe47c855f92dc5ddd0df48a2ecbe103a980ccdd41ac0da4c"
	I1109 14:12:02.945608  188127 cri.go:89] found id: "b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:12:02.945613  188127 cri.go:89] found id: ""
	I1109 14:12:02.945621  188127 logs.go:282] 2 containers: [4a0e4703bcafc0efbe47c855f92dc5ddd0df48a2ecbe103a980ccdd41ac0da4c b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd]
	I1109 14:12:02.945689  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:12:02.949476  188127 ssh_runner.go:195] Run: which crictl
	I1109 14:12:02.953319  188127 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 14:12:02.953363  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 14:12:02.980502  188127 cri.go:89] found id: ""
	I1109 14:12:02.980521  188127 logs.go:282] 0 containers: []
	W1109 14:12:02.980527  188127 logs.go:284] No container was found matching "kindnet"
	I1109 14:12:02.980533  188127 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1109 14:12:02.980570  188127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1109 14:12:03.008968  188127 cri.go:89] found id: ""
	I1109 14:12:03.008990  188127 logs.go:282] 0 containers: []
	W1109 14:12:03.008997  188127 logs.go:284] No container was found matching "storage-provisioner"
	I1109 14:12:03.009012  188127 logs.go:123] Gathering logs for dmesg ...
	I1109 14:12:03.009024  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 14:12:03.023410  188127 logs.go:123] Gathering logs for describe nodes ...
	I1109 14:12:03.023438  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1109 14:12:00.843353  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:01.343300  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:01.843284  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:02.343450  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:02.842842  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:03.342798  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:03.843475  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:04.343108  234584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:12:04.415730  234584 kubeadm.go:1114] duration metric: took 4.657704986s to wait for elevateKubeSystemPrivileges
	I1109 14:12:04.415764  234584 kubeadm.go:403] duration metric: took 15.363248411s to StartCluster
	I1109 14:12:04.415782  234584 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:12:04.415842  234584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:12:04.417139  234584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:12:04.417354  234584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:12:04.417364  234584 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:12:04.417441  234584 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:12:04.417544  234584 addons.go:70] Setting storage-provisioner=true in profile "no-preload-152932"
	I1109 14:12:04.417566  234584 addons.go:239] Setting addon storage-provisioner=true in "no-preload-152932"
	I1109 14:12:04.417576  234584 addons.go:70] Setting default-storageclass=true in profile "no-preload-152932"
	I1109 14:12:04.417599  234584 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-152932"
	I1109 14:12:04.417604  234584 host.go:66] Checking if "no-preload-152932" exists ...
	I1109 14:12:04.417613  234584 config.go:182] Loaded profile config "no-preload-152932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:12:04.417940  234584 cli_runner.go:164] Run: docker container inspect no-preload-152932 --format={{.State.Status}}
	I1109 14:12:04.418090  234584 cli_runner.go:164] Run: docker container inspect no-preload-152932 --format={{.State.Status}}
	I1109 14:12:04.418785  234584 out.go:179] * Verifying Kubernetes components...
	I1109 14:12:04.420192  234584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:12:04.441375  234584 addons.go:239] Setting addon default-storageclass=true in "no-preload-152932"
	I1109 14:12:04.441418  234584 host.go:66] Checking if "no-preload-152932" exists ...
	I1109 14:12:04.441941  234584 cli_runner.go:164] Run: docker container inspect no-preload-152932 --format={{.State.Status}}
	I1109 14:12:04.444128  234584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:12:04.445472  234584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:12:04.445492  234584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:12:04.445555  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:12:04.465378  234584 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:12:04.465416  234584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:12:04.465514  234584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:12:04.475682  234584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa Username:docker}
	I1109 14:12:04.489711  234584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa Username:docker}
	I1109 14:12:04.515015  234584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:12:04.563018  234584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:12:04.592774  234584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:12:04.605924  234584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:12:04.708795  234584 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1109 14:12:04.710166  234584 node_ready.go:35] waiting up to 6m0s for node "no-preload-152932" to be "Ready" ...
	I1109 14:12:04.900212  234584 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:12:04.901154  234584 addons.go:515] duration metric: took 483.721551ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:12:05.213565  234584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-152932" context rescaled to 1 replicas
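The sed pipeline run against the coredns ConfigMap at 14:12:04.515015 above rewrites the Corefile so that host.minikube.internal resolves to the host gateway 192.168.103.1. Reconstructed from those sed expressions alone (not captured from the running cluster), the injected fragment looks roughly like this:

	        log
	        errors
	        ...
	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...

The hosts block is inserted immediately before the existing "forward . /etc/resolv.conf" line and "log" immediately before "errors"; the ellipses stand for the untouched remainder of the default Corefile.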
	I1109 14:12:06.587360  188127 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.56389735s)
	W1109 14:12:06.587413  188127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:53816->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:53816->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1109 14:12:06.587427  188127 logs.go:123] Gathering logs for kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689] ...
	I1109 14:12:06.587440  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	W1109 14:12:06.614731  188127 logs.go:130] failed kube-apiserver [278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689": Process exited with status 1
	stdout:
	
	stderr:
	E1109 14:12:06.612564    5433 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689\": container with ID starting with 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689 not found: ID does not exist" containerID="278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	time="2025-11-09T14:12:06Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689\": container with ID starting with 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1109 14:12:06.612564    5433 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689\": container with ID starting with 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689 not found: ID does not exist" containerID="278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689"
	time="2025-11-09T14:12:06Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689\": container with ID starting with 278f2b7eb3f0b79010a96dc73c8c9ce3277cec1af1ae6ce637a902e8b665f689 not found: ID does not exist"
	
	** /stderr **
	I1109 14:12:06.614752  188127 logs.go:123] Gathering logs for kube-scheduler [5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82] ...
	I1109 14:12:06.614763  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b2b53ebbdb45319390885e8d31b333247bc4cbb21218cdd833c8c2dbe3f6c82"
	I1109 14:12:06.666261  188127 logs.go:123] Gathering logs for kube-controller-manager [4a0e4703bcafc0efbe47c855f92dc5ddd0df48a2ecbe103a980ccdd41ac0da4c] ...
	I1109 14:12:06.666290  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4a0e4703bcafc0efbe47c855f92dc5ddd0df48a2ecbe103a980ccdd41ac0da4c"
	I1109 14:12:06.692458  188127 logs.go:123] Gathering logs for kube-controller-manager [b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd] ...
	I1109 14:12:06.692482  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b30f1faf86e7174f790db058022216ce7d1bcd8d7cb675fa5c2f111674f1b1fd"
	I1109 14:12:06.718078  188127 logs.go:123] Gathering logs for container status ...
	I1109 14:12:06.718099  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 14:12:06.746838  188127 logs.go:123] Gathering logs for kubelet ...
	I1109 14:12:06.746861  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 14:12:06.841224  188127 logs.go:123] Gathering logs for kube-apiserver [51bd0a407228f6671ff6e04c3a75c7f2da7ad5d676a2ba58130c76caf6b7b06d] ...
	I1109 14:12:06.841256  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51bd0a407228f6671ff6e04c3a75c7f2da7ad5d676a2ba58130c76caf6b7b06d"
	I1109 14:12:06.872143  188127 logs.go:123] Gathering logs for CRI-O ...
	I1109 14:12:06.872169  188127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1109 14:12:06.713172  234584 node_ready.go:57] node "no-preload-152932" has "Ready":"False" status (will retry)
	W1109 14:12:08.713559  234584 node_ready.go:57] node "no-preload-152932" has "Ready":"False" status (will retry)
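The node_ready.go retries above show minikube polling the node object for up to six minutes until its Ready condition becomes True. A rough manual equivalent, assuming the kubeconfig context that minikube creates for this profile, would be:

	kubectl --context no-preload-152932 wait --for=condition=Ready node/no-preload-152932 --timeout=6m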
	
	
	==> CRI-O <==
	Nov 09 14:12:00 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:00.574802172Z" level=info msg="Starting container: 1a8aa0d8d43e4cf42a3512af1c7812cf0382aec0e9e098c4af50599c157bd079" id=24b2cfb7-21ea-48a0-931f-08170f93b8f9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:12:00 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:00.576537215Z" level=info msg="Started container" PID=2186 containerID=1a8aa0d8d43e4cf42a3512af1c7812cf0382aec0e9e098c4af50599c157bd079 description=kube-system/coredns-5dd5756b68-5bgfs/coredns id=24b2cfb7-21ea-48a0-931f-08170f93b8f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=52ac5a833738bab88fa3026a51173346a1fa54cacee22b6f893de2d911f2d701
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.96677716Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1e94c50d-824e-43f3-82ba-b0f2272468b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.966853881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.972362765Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fc9e59c01c05fd4ab373af6d6a8a87646af40643c925ccccab4381d5f1575e75 UID:b8660e9d-e2a4-48ea-806d-dbea8dc9c026 NetNS:/var/run/netns/d2a142ef-28af-42b3-85eb-08c2a901e689 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b638}] Aliases:map[]}"
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.972389081Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.982695044Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fc9e59c01c05fd4ab373af6d6a8a87646af40643c925ccccab4381d5f1575e75 UID:b8660e9d-e2a4-48ea-806d-dbea8dc9c026 NetNS:/var/run/netns/d2a142ef-28af-42b3-85eb-08c2a901e689 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b638}] Aliases:map[]}"
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.98282847Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.983500979Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.984283124Z" level=info msg="Ran pod sandbox fc9e59c01c05fd4ab373af6d6a8a87646af40643c925ccccab4381d5f1575e75 with infra container: default/busybox/POD" id=1e94c50d-824e-43f3-82ba-b0f2272468b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.985434284Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c58084ea-9e46-4eca-8217-a6e8b34b1437 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.985531763Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c58084ea-9e46-4eca-8217-a6e8b34b1437 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.98556187Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c58084ea-9e46-4eca-8217-a6e8b34b1437 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.986088879Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6cbf02cc-0a1c-49c3-a36b-45084bfa07d0 name=/runtime.v1.ImageService/PullImage
	Nov 09 14:12:02 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:02.987597403Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.690919827Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6cbf02cc-0a1c-49c3-a36b-45084bfa07d0 name=/runtime.v1.ImageService/PullImage
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.691504897Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e910f7cf-6be4-4f06-9b52-084088074865 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.692818755Z" level=info msg="Creating container: default/busybox/busybox" id=be0d9936-5349-440d-911a-1981fbfd6f9f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.692946063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.696575831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.697034087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.720411369Z" level=info msg="Created container 9c3bb7ea0db2649f5b3f39a2590270b065b91d3cb90d9fa4520299b97317a0ac: default/busybox/busybox" id=be0d9936-5349-440d-911a-1981fbfd6f9f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.720830982Z" level=info msg="Starting container: 9c3bb7ea0db2649f5b3f39a2590270b065b91d3cb90d9fa4520299b97317a0ac" id=fb4412a3-bc69-46c4-ae9f-70dd460188f9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:12:03 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:03.722320326Z" level=info msg="Started container" PID=2265 containerID=9c3bb7ea0db2649f5b3f39a2590270b065b91d3cb90d9fa4520299b97317a0ac description=default/busybox/busybox id=fb4412a3-bc69-46c4-ae9f-70dd460188f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc9e59c01c05fd4ab373af6d6a8a87646af40643c925ccccab4381d5f1575e75
	Nov 09 14:12:10 old-k8s-version-169816 crio[778]: time="2025-11-09T14:12:10.740213992Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	9c3bb7ea0db26       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   fc9e59c01c05f       busybox                                          default
	1a8aa0d8d43e4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   52ac5a833738b       coredns-5dd5756b68-5bgfs                         kube-system
	50000674c9ced       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   1d76ddd028a14       storage-provisioner                              kube-system
	d9562d15ef1b1       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   ce0c390409b07       kindnet-mjzvm                                    kube-system
	ae8bb3473fc6e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      23 seconds ago      Running             kube-proxy                0                   dbc90fc24e5e8       kube-proxy-96xbm                                 kube-system
	265fe32826c5e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   02d78c581723d       etcd-old-k8s-version-169816                      kube-system
	980e10c99a7c6       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   d3f5f29086abf       kube-scheduler-old-k8s-version-169816            kube-system
	262774ff2a92e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   7169a7b4f4b03       kube-apiserver-old-k8s-version-169816            kube-system
	839e779b822e1       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   62be76880ad44       kube-controller-manager-old-k8s-version-169816   kube-system
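The container status table above corresponds to running "sudo crictl ps -a" on the node. To pull the log of an individual container from this profile while the cluster is still up, one sketch (going through minikube ssh rather than a direct SSH session) is:

	minikube ssh -p old-k8s-version-169816 -- sudo crictl logs 9c3bb7ea0db2649f5b3f39a2590270b065b91d3cb90d9fa4520299b97317a0ac

which targets the busybox container listed in the first row.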
	
	
	==> coredns [1a8aa0d8d43e4cf42a3512af1c7812cf0382aec0e9e098c4af50599c157bd079] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58265 - 30116 "HINFO IN 2323400447280476016.7980731037616625456. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.923406985s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-169816
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-169816
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=old-k8s-version-169816
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_11_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:11:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-169816
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:12:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:12:03 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:12:03 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:12:03 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:12:03 +0000   Sun, 09 Nov 2025 14:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-169816
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                11632483-d582-4ced-bfcd-ac7706e38a54
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-5bgfs                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-169816                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-mjzvm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-169816             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-169816    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-96xbm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-169816             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x9 over 44s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x7 over 44s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-169816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node old-k8s-version-169816 event: Registered Node old-k8s-version-169816 in Controller
	  Normal  NodeReady                12s                kubelet          Node old-k8s-version-169816 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [265fe32826c5e86cf6eede3a40fd01786c6337f9b4ba436c0cf98d01294b149f] <==
	{"level":"info","ts":"2025-11-09T14:11:28.765905Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-09T14:11:28.766007Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-09T14:11:28.766076Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-09T14:11:28.766115Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-09T14:11:28.766161Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-09T14:11:29.555415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-09T14:11:29.555458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-09T14:11:29.555479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-09T14:11:29.555508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-09T14:11:29.555514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-09T14:11:29.555522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-09T14:11:29.555529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-09T14:11:29.55632Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:11:29.557059Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:11:29.557058Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-169816 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-09T14:11:29.557085Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:11:29.557267Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:11:29.557295Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-09T14:11:29.55731Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-09T14:11:29.557362Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:11:29.557386Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:11:29.558438Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-09T14:11:29.558481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-09T14:11:32.310091Z","caller":"traceutil/trace.go:171","msg":"trace[665937835] transaction","detail":"{read_only:false; response_revision:230; number_of_response:1; }","duration":"131.165549ms","start":"2025-11-09T14:11:32.178889Z","end":"2025-11-09T14:11:32.310055Z","steps":["trace[665937835] 'process raft request'  (duration: 126.740417ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:11:45.732936Z","caller":"traceutil/trace.go:171","msg":"trace[1041267968] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"117.715559ms","start":"2025-11-09T14:11:45.615191Z","end":"2025-11-09T14:11:45.732906Z","steps":["trace[1041267968] 'process raft request'  (duration: 52.351447ms)","trace[1041267968] 'compare'  (duration: 65.199715ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:12:12 up 54 min,  0 user,  load average: 3.52, 2.94, 1.80
	Linux old-k8s-version-169816 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d9562d15ef1b1ff6e8e179ee015a23b10ee628441949596f6a860bb8b7044938] <==
	I1109 14:11:49.852523       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:11:49.852785       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:11:49.852940       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:11:49.852957       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:11:49.852980       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:11:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:11:50.054845       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:11:50.054875       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:11:50.054888       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:11:50.055155       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:11:50.455762       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:11:50.455781       1 metrics.go:72] Registering metrics
	I1109 14:11:50.455842       1 controller.go:711] "Syncing nftables rules"
	I1109 14:12:00.056506       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:12:00.056562       1 main.go:301] handling current node
	I1109 14:12:10.060093       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:12:10.060120       1 main.go:301] handling current node
	
	
	==> kube-apiserver [262774ff2a92e3eb382b35d723e9011f1d845c4b48640b86ed17ed0ec117c2d7] <==
	I1109 14:11:30.661923       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:11:30.662112       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 14:11:30.663221       1 controller.go:624] quota admission added evaluator for: namespaces
	I1109 14:11:30.664693       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1109 14:11:30.665439       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1109 14:11:30.665471       1 aggregator.go:166] initial CRD sync complete...
	I1109 14:11:30.665479       1 autoregister_controller.go:141] Starting autoregister controller
	I1109 14:11:30.665485       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:11:30.665493       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:11:30.670858       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:11:31.566542       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:11:31.569971       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:11:31.569986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:11:32.032888       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:11:32.084163       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:11:32.310723       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:11:32.320135       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1109 14:11:32.321495       1 controller.go:624] quota admission added evaluator for: endpoints
	I1109 14:11:32.342768       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:11:32.590660       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1109 14:11:33.614434       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1109 14:11:33.622612       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:11:33.630348       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1109 14:11:47.656127       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1109 14:11:47.710266       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [839e779b822e1fb4d99941e8fd2c4b5648b3f0457bef1d0995b77c36a1798c2f] <==
	I1109 14:11:46.902036       1 shared_informer.go:318] Caches are synced for deployment
	I1109 14:11:46.950187       1 shared_informer.go:318] Caches are synced for HPA
	I1109 14:11:46.958344       1 shared_informer.go:318] Caches are synced for resource quota
	I1109 14:11:46.970487       1 shared_informer.go:318] Caches are synced for disruption
	I1109 14:11:47.006835       1 shared_informer.go:318] Caches are synced for resource quota
	I1109 14:11:47.330470       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:11:47.342817       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:11:47.343125       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1109 14:11:47.668909       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-96xbm"
	I1109 14:11:47.672393       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mjzvm"
	I1109 14:11:47.714981       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1109 14:11:47.743379       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1109 14:11:47.809284       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6dz6x"
	I1109 14:11:47.815307       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5bgfs"
	I1109 14:11:47.827778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.485916ms"
	I1109 14:11:47.837993       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6dz6x"
	I1109 14:11:47.844485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.652517ms"
	I1109 14:11:47.853440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.897189ms"
	I1109 14:11:47.853619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.019µs"
	I1109 14:12:00.231371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.993µs"
	I1109 14:12:00.247098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.334µs"
	I1109 14:12:00.765885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.52µs"
	I1109 14:12:00.790072       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.718942ms"
	I1109 14:12:00.790246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.641µs"
	I1109 14:12:01.779946       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [ae8bb3473fc6efc31aa076458442cc88227da76d798da35f83c2fc3c2a611819] <==
	I1109 14:11:48.059968       1 server_others.go:69] "Using iptables proxy"
	I1109 14:11:48.069303       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1109 14:11:48.089135       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:11:48.091464       1 server_others.go:152] "Using iptables Proxier"
	I1109 14:11:48.091492       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 14:11:48.091500       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 14:11:48.091537       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 14:11:48.091789       1 server.go:846] "Version info" version="v1.28.0"
	I1109 14:11:48.091814       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:11:48.092899       1 config.go:97] "Starting endpoint slice config controller"
	I1109 14:11:48.092942       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 14:11:48.093082       1 config.go:188] "Starting service config controller"
	I1109 14:11:48.093097       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 14:11:48.093674       1 config.go:315] "Starting node config controller"
	I1109 14:11:48.093695       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 14:11:48.193502       1 shared_informer.go:318] Caches are synced for service config
	I1109 14:11:48.193531       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1109 14:11:48.193774       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [980e10c99a7c6f0d0788ed56931953b529e19374143f1f08f3ea0d0d175e3fb1] <==
	W1109 14:11:30.638728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 14:11:30.638752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1109 14:11:30.639266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1109 14:11:30.639284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1109 14:11:31.521466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1109 14:11:31.521506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1109 14:11:31.555563       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1109 14:11:31.555606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1109 14:11:31.583232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1109 14:11:31.583269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1109 14:11:31.648323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1109 14:11:31.648357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1109 14:11:31.682020       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1109 14:11:31.682062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1109 14:11:31.718831       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 14:11:31.718885       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1109 14:11:31.722237       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1109 14:11:31.722274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1109 14:11:31.791868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1109 14:11:31.791962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1109 14:11:31.816393       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1109 14:11:31.816427       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:11:31.835009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1109 14:11:31.835046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1109 14:11:34.333847       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 09 14:11:46 old-k8s-version-169816 kubelet[1420]: I1109 14:11:46.849104    1420 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.681207    1420 topology_manager.go:215] "Topology Admit Handler" podUID="75cd36ca-24dd-42c7-832f-af68236aa60b" podNamespace="kube-system" podName="kube-proxy-96xbm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.681410    1420 topology_manager.go:215] "Topology Admit Handler" podUID="e7039a8f-f22c-4c68-b80d-ae4be93c9336" podNamespace="kube-system" podName="kindnet-mjzvm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.770710    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75cd36ca-24dd-42c7-832f-af68236aa60b-xtables-lock\") pod \"kube-proxy-96xbm\" (UID: \"75cd36ca-24dd-42c7-832f-af68236aa60b\") " pod="kube-system/kube-proxy-96xbm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.770764    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e7039a8f-f22c-4c68-b80d-ae4be93c9336-cni-cfg\") pod \"kindnet-mjzvm\" (UID: \"e7039a8f-f22c-4c68-b80d-ae4be93c9336\") " pod="kube-system/kindnet-mjzvm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.770833    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75cd36ca-24dd-42c7-832f-af68236aa60b-kube-proxy\") pod \"kube-proxy-96xbm\" (UID: \"75cd36ca-24dd-42c7-832f-af68236aa60b\") " pod="kube-system/kube-proxy-96xbm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.770876    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75cd36ca-24dd-42c7-832f-af68236aa60b-lib-modules\") pod \"kube-proxy-96xbm\" (UID: \"75cd36ca-24dd-42c7-832f-af68236aa60b\") " pod="kube-system/kube-proxy-96xbm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.770905    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7039a8f-f22c-4c68-b80d-ae4be93c9336-xtables-lock\") pod \"kindnet-mjzvm\" (UID: \"e7039a8f-f22c-4c68-b80d-ae4be93c9336\") " pod="kube-system/kindnet-mjzvm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.770940    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdp5\" (UniqueName: \"kubernetes.io/projected/e7039a8f-f22c-4c68-b80d-ae4be93c9336-kube-api-access-frdp5\") pod \"kindnet-mjzvm\" (UID: \"e7039a8f-f22c-4c68-b80d-ae4be93c9336\") " pod="kube-system/kindnet-mjzvm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.771003    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gphq8\" (UniqueName: \"kubernetes.io/projected/75cd36ca-24dd-42c7-832f-af68236aa60b-kube-api-access-gphq8\") pod \"kube-proxy-96xbm\" (UID: \"75cd36ca-24dd-42c7-832f-af68236aa60b\") " pod="kube-system/kube-proxy-96xbm"
	Nov 09 14:11:47 old-k8s-version-169816 kubelet[1420]: I1109 14:11:47.771037    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7039a8f-f22c-4c68-b80d-ae4be93c9336-lib-modules\") pod \"kindnet-mjzvm\" (UID: \"e7039a8f-f22c-4c68-b80d-ae4be93c9336\") " pod="kube-system/kindnet-mjzvm"
	Nov 09 14:11:48 old-k8s-version-169816 kubelet[1420]: I1109 14:11:48.747322    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-96xbm" podStartSLOduration=1.747272333 podCreationTimestamp="2025-11-09 14:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:11:48.747099864 +0000 UTC m=+15.155697914" watchObservedRunningTime="2025-11-09 14:11:48.747272333 +0000 UTC m=+15.155870377"
	Nov 09 14:11:49 old-k8s-version-169816 kubelet[1420]: I1109 14:11:49.744816    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mjzvm" podStartSLOduration=1.053367592 podCreationTimestamp="2025-11-09 14:11:47 +0000 UTC" firstStartedPulling="2025-11-09 14:11:47.99082273 +0000 UTC m=+14.399420760" lastFinishedPulling="2025-11-09 14:11:49.682220022 +0000 UTC m=+16.090818066" observedRunningTime="2025-11-09 14:11:49.744653107 +0000 UTC m=+16.153251151" watchObservedRunningTime="2025-11-09 14:11:49.744764898 +0000 UTC m=+16.153362949"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.209624    1420 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.230478    1420 topology_manager.go:215] "Topology Admit Handler" podUID="3a49fdf1-199e-4d41-978b-a0fb1b33155b" podNamespace="kube-system" podName="storage-provisioner"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.231384    1420 topology_manager.go:215] "Topology Admit Handler" podUID="902d0a81-0e13-4f1b-a2cb-b00a65fcf0cf" podNamespace="kube-system" podName="coredns-5dd5756b68-5bgfs"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.266081    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmvcw\" (UniqueName: \"kubernetes.io/projected/902d0a81-0e13-4f1b-a2cb-b00a65fcf0cf-kube-api-access-zmvcw\") pod \"coredns-5dd5756b68-5bgfs\" (UID: \"902d0a81-0e13-4f1b-a2cb-b00a65fcf0cf\") " pod="kube-system/coredns-5dd5756b68-5bgfs"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.266119    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902d0a81-0e13-4f1b-a2cb-b00a65fcf0cf-config-volume\") pod \"coredns-5dd5756b68-5bgfs\" (UID: \"902d0a81-0e13-4f1b-a2cb-b00a65fcf0cf\") " pod="kube-system/coredns-5dd5756b68-5bgfs"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.266139    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjjqb\" (UniqueName: \"kubernetes.io/projected/3a49fdf1-199e-4d41-978b-a0fb1b33155b-kube-api-access-kjjqb\") pod \"storage-provisioner\" (UID: \"3a49fdf1-199e-4d41-978b-a0fb1b33155b\") " pod="kube-system/storage-provisioner"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.266158    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3a49fdf1-199e-4d41-978b-a0fb1b33155b-tmp\") pod \"storage-provisioner\" (UID: \"3a49fdf1-199e-4d41-978b-a0fb1b33155b\") " pod="kube-system/storage-provisioner"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.765632    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5bgfs" podStartSLOduration=13.765584046 podCreationTimestamp="2025-11-09 14:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:12:00.76556583 +0000 UTC m=+27.174163879" watchObservedRunningTime="2025-11-09 14:12:00.765584046 +0000 UTC m=+27.174182095"
	Nov 09 14:12:00 old-k8s-version-169816 kubelet[1420]: I1109 14:12:00.783580    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.783533789 podCreationTimestamp="2025-11-09 14:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:12:00.774367757 +0000 UTC m=+27.182965808" watchObservedRunningTime="2025-11-09 14:12:00.783533789 +0000 UTC m=+27.192131877"
	Nov 09 14:12:02 old-k8s-version-169816 kubelet[1420]: I1109 14:12:02.664787    1420 topology_manager.go:215] "Topology Admit Handler" podUID="b8660e9d-e2a4-48ea-806d-dbea8dc9c026" podNamespace="default" podName="busybox"
	Nov 09 14:12:02 old-k8s-version-169816 kubelet[1420]: I1109 14:12:02.680077    1420 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qz9x\" (UniqueName: \"kubernetes.io/projected/b8660e9d-e2a4-48ea-806d-dbea8dc9c026-kube-api-access-9qz9x\") pod \"busybox\" (UID: \"b8660e9d-e2a4-48ea-806d-dbea8dc9c026\") " pod="default/busybox"
	Nov 09 14:12:03 old-k8s-version-169816 kubelet[1420]: I1109 14:12:03.771728    1420 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.066319029 podCreationTimestamp="2025-11-09 14:12:02 +0000 UTC" firstStartedPulling="2025-11-09 14:12:02.985734683 +0000 UTC m=+29.394332715" lastFinishedPulling="2025-11-09 14:12:03.691096147 +0000 UTC m=+30.099694182" observedRunningTime="2025-11-09 14:12:03.77145085 +0000 UTC m=+30.180048904" watchObservedRunningTime="2025-11-09 14:12:03.771680496 +0000 UTC m=+30.180278545"
	
	
	==> storage-provisioner [50000674c9ced39f8f55d37e293fa243270f263b805fc5d36e6323e1e657b6b4] <==
	I1109 14:12:00.585424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:12:00.593848       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:12:00.593960       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 14:12:00.600566       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:12:00.600681       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c4c27452-9f34-4b03-8815-bd5ff2390444", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-169816_dfd51d1f-dbaa-487d-b681-f56d17aaf128 became leader
	I1109 14:12:00.600760       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169816_dfd51d1f-dbaa-487d-b681-f56d17aaf128!
	I1109 14:12:00.701690       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169816_dfd51d1f-dbaa-487d-b681-f56d17aaf128!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169816 -n old-k8s-version-169816
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-169816 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.189633ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:12:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
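For context on the exit status 11 above: MK_ADDON_ENABLE_PAUSED comes from minikube's pre-flight check for paused containers, which on this crio node shells out to runc (the "sudo runc list -f json" shown in the stderr). The check itself fails because /run/runc does not exist inside the node, so the addon is never actually enabled. A minimal sketch of reproducing just that failing check by hand, using the profile name from this run:

	out/minikube-linux-amd64 -p no-preload-152932 ssh "sudo runc list -f json"
	# expected on this node, per the stderr above: open /run/runc: no such file or directory (exit status 1)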
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-152932 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-152932 describe deploy/metrics-server -n kube-system: exit status 1 (68.842705ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-152932 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
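The assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to carry the image and registry overrides passed via --images/--registries; because the enable step failed, the deployment was never created, the describe above returns NotFound, and the "deployment info" is empty. When the addon does come up, one way to verify the override would be (a sketch using this profile's kubectl context):

	kubectl --context no-preload-152932 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
	# on a successful enable this should print fake.domain/registry.k8s.io/echoserver:1.4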
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-152932
helpers_test.go:243: (dbg) docker inspect no-preload-152932:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf",
	        "Created": "2025-11-09T14:11:31.387722642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235036,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:11:31.419657437Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/hosts",
	        "LogPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf-json.log",
	        "Name": "/no-preload-152932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-152932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-152932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf",
	                "LowerDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-152932",
	                "Source": "/var/lib/docker/volumes/no-preload-152932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-152932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-152932",
	                "name.minikube.sigs.k8s.io": "no-preload-152932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "687a1c95fd114654c61ccaa595d18575658bb5aa84681cdca7cc4b7e9d0b81a8",
	            "SandboxKey": "/var/run/docker/netns/687a1c95fd11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-152932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:dd:7f:49:8f:f9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c509180f30963f7e773167a4898cba178d323dd41609baf99fe1db9a86f38a9",
	                    "EndpointID": "505f4414539b17ea0f19f8d6f77786ecbe8f4ae0f9715884f3be7ba13530794c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-152932",
	                        "026fe7b1acd1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
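The docker inspect dump above is kept in full for the post-mortem; when only one slice of it matters, for example the host ports published for the API server, docker's --format flag with a Go template trims the output. A sketch against the same container:

	docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-152932
	# for this run, 8443/tcp maps to 127.0.0.1:33063 per NetworkSettings.Ports above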
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152932 -n no-preload-152932
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-152932 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-152932 logs -n 25: (1.11819071s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-593530 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo containerd config dump                                                                                                                                                                                                  │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ ssh     │ -p cilium-593530 sudo crio config                                                                                                                                                                                                             │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │                     │
	│ delete  │ -p cilium-593530                                                                                                                                                                                                                              │ cilium-593530          │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │ 09 Nov 25 14:10 UTC │
	│ start   │ -p cert-options-350702 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:10 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ cert-options-350702 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ -p cert-options-350702 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ delete  │ -p cert-options-350702                                                                                                                                                                                                                        │ cert-options-350702    │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ pause   │ -p pause-092489 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	│ delete  │ -p pause-092489                                                                                                                                                                                                                               │ pause-092489           │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932      │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-169816 │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ stop    │ -p old-k8s-version-169816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-169816 │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p cert-expiration-883873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-883873 │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-152932      │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-169816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-169816 │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:12:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:12:26.555721  242807 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:12:26.555963  242807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:12:26.555966  242807 out.go:374] Setting ErrFile to fd 2...
	I1109 14:12:26.555969  242807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:12:26.556147  242807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:12:26.556531  242807 out.go:368] Setting JSON to false
	I1109 14:12:26.557594  242807 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3296,"bootTime":1762694250,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:12:26.557688  242807 start.go:143] virtualization: kvm guest
	I1109 14:12:26.559860  242807 out.go:179] * [cert-expiration-883873] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:12:26.561396  242807 notify.go:221] Checking for updates...
	I1109 14:12:26.561418  242807 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:12:26.562508  242807 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:12:26.563851  242807 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:12:26.564935  242807 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:12:26.565965  242807 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:12:26.566990  242807 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:12:26.568264  242807 config.go:182] Loaded profile config "cert-expiration-883873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:12:26.568737  242807 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:12:26.592889  242807 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:12:26.593000  242807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:12:26.648447  242807 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-09 14:12:26.638799607 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:12:26.648541  242807 docker.go:319] overlay module found
	I1109 14:12:26.650038  242807 out.go:179] * Using the docker driver based on existing profile
	I1109 14:12:26.651074  242807 start.go:309] selected driver: docker
	I1109 14:12:26.651081  242807 start.go:930] validating driver "docker" against &{Name:cert-expiration-883873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-883873 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:12:26.651204  242807 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:12:26.652015  242807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:12:26.709715  242807 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-09 14:12:26.698106428 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:12:26.709954  242807 cni.go:84] Creating CNI manager for ""
	I1109 14:12:26.710007  242807 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:12:26.710043  242807 start.go:353] cluster config:
	{Name:cert-expiration-883873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-883873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1109 14:12:26.711574  242807 out.go:179] * Starting "cert-expiration-883873" primary control-plane node in "cert-expiration-883873" cluster
	I1109 14:12:26.712574  242807 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:12:26.713653  242807 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:12:26.714623  242807 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:12:26.714652  242807 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:12:26.714670  242807 cache.go:65] Caching tarball of preloaded images
	I1109 14:12:26.714681  242807 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:12:26.714758  242807 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:12:26.714767  242807 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:12:26.714849  242807 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/cert-expiration-883873/config.json ...
	I1109 14:12:26.734427  242807 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:12:26.734437  242807 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:12:26.734451  242807 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:12:26.734475  242807 start.go:360] acquireMachinesLock for cert-expiration-883873: {Name:mk30392ba73f000a9e842d4bb247cf0f9423738a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:12:26.734530  242807 start.go:364] duration metric: took 38.112µs to acquireMachinesLock for "cert-expiration-883873"
	I1109 14:12:26.734545  242807 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:12:26.734550  242807 fix.go:54] fixHost starting: 
	I1109 14:12:26.734789  242807 cli_runner.go:164] Run: docker container inspect cert-expiration-883873 --format={{.State.Status}}
	I1109 14:12:26.750968  242807 fix.go:112] recreateIfNeeded on cert-expiration-883873: state=Running err=<nil>
	W1109 14:12:26.750985  242807 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 09 14:12:17 no-preload-152932 crio[767]: time="2025-11-09T14:12:17.482999664Z" level=info msg="Starting container: 605e3bdb8206a232e468ab34e13d368e1b78577160d7cba6ffa0feaace892e53" id=f86e8537-cf8f-4401-9bdf-8379e57e6d38 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:12:17 no-preload-152932 crio[767]: time="2025-11-09T14:12:17.484711725Z" level=info msg="Started container" PID=2894 containerID=605e3bdb8206a232e468ab34e13d368e1b78577160d7cba6ffa0feaace892e53 description=kube-system/coredns-66bc5c9577-6ssc5/coredns id=f86e8537-cf8f-4401-9bdf-8379e57e6d38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec1a1e0809623fb1c14652e3ef14fccab69030dd08204ffe257399a5fce2cb29
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.354071075Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2604989a-a7ca-4bdb-bc58-9e60c8cfedd1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.354160414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.359716845Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6cb65339b90c0d99e035168591ea90cb6394285b20ef9658c0b63d11c60b9bfa UID:84072b78-3173-4704-8820-d187e9262dd9 NetNS:/var/run/netns/d7e1cf90-5cab-43d7-8526-588d47e58365 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005208e0}] Aliases:map[]}"
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.359745735Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.369435044Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6cb65339b90c0d99e035168591ea90cb6394285b20ef9658c0b63d11c60b9bfa UID:84072b78-3173-4704-8820-d187e9262dd9 NetNS:/var/run/netns/d7e1cf90-5cab-43d7-8526-588d47e58365 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005208e0}] Aliases:map[]}"
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.369545601Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.370274118Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.371201553Z" level=info msg="Ran pod sandbox 6cb65339b90c0d99e035168591ea90cb6394285b20ef9658c0b63d11c60b9bfa with infra container: default/busybox/POD" id=2604989a-a7ca-4bdb-bc58-9e60c8cfedd1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.372285355Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7e33c7b9-568a-4cae-8b2d-891f592028d1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.372399178Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7e33c7b9-568a-4cae-8b2d-891f592028d1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.372431885Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7e33c7b9-568a-4cae-8b2d-891f592028d1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.372943978Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=371abf2e-1a0b-4ef0-ad06-9e971278521f name=/runtime.v1.ImageService/PullImage
	Nov 09 14:12:20 no-preload-152932 crio[767]: time="2025-11-09T14:12:20.374254229Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.123916771Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=371abf2e-1a0b-4ef0-ad06-9e971278521f name=/runtime.v1.ImageService/PullImage
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.1243566Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1074924e-00f6-497e-9fd7-3943a89a825d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.125443484Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=782a5529-5c73-4e29-8da1-fa52ea7131fc name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.128291913Z" level=info msg="Creating container: default/busybox/busybox" id=fddb5238-e567-4226-a98a-e1dc750f8901 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.128405508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.131621566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.132014736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.156430631Z" level=info msg="Created container 4873caf91a15cd830ab09b2f47dc429d225c773813a977290505e97ba5ab5e00: default/busybox/busybox" id=fddb5238-e567-4226-a98a-e1dc750f8901 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.156841331Z" level=info msg="Starting container: 4873caf91a15cd830ab09b2f47dc429d225c773813a977290505e97ba5ab5e00" id=ea662006-28e1-425a-b8b9-ef7b7fa4b3c6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:12:21 no-preload-152932 crio[767]: time="2025-11-09T14:12:21.158420138Z" level=info msg="Started container" PID=2968 containerID=4873caf91a15cd830ab09b2f47dc429d225c773813a977290505e97ba5ab5e00 description=default/busybox/busybox id=ea662006-28e1-425a-b8b9-ef7b7fa4b3c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6cb65339b90c0d99e035168591ea90cb6394285b20ef9658c0b63d11c60b9bfa
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4873caf91a15c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   6cb65339b90c0       busybox                                     default
	605e3bdb8206a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   ec1a1e0809623       coredns-66bc5c9577-6ssc5                    kube-system
	31924232a626f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   1b7666988601b       storage-provisioner                         kube-system
	7900799d6a7b8       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   f7d4fcd9644d0       kindnet-qk599                               kube-system
	34aa8de3d8dd1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   4171f5325ab7c       kube-proxy-f5tgg                            kube-system
	eb554e0fb4c35       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   92c3d0131a0d1       etcd-no-preload-152932                      kube-system
	a35c05b752be2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   e832f73a39400       kube-scheduler-no-preload-152932            kube-system
	31ab6cb262b0a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   abf7670502759       kube-apiserver-no-preload-152932            kube-system
	13f394ec4c85c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   7beb1165b41ce       kube-controller-manager-no-preload-152932   kube-system
	
	
	==> coredns [605e3bdb8206a232e468ab34e13d368e1b78577160d7cba6ffa0feaace892e53] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59953 - 43276 "HINFO IN 3895814667325221947.4480425808991397012. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.137822339s
	
	
	==> describe nodes <==
	Name:               no-preload-152932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-152932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=no-preload-152932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_11_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-152932
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:12:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:12:29 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:12:29 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:12:29 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:12:29 +0000   Sun, 09 Nov 2025 14:12:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-152932
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                32c8871f-6491-4e30-8669-8cd62f18ad7c
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-6ssc5                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-152932                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-qk599                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-152932             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-152932    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-f5tgg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-152932             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node no-preload-152932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node no-preload-152932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node no-preload-152932 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node no-preload-152932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node no-preload-152932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node no-preload-152932 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node no-preload-152932 event: Registered Node no-preload-152932 in Controller
	  Normal  NodeReady                12s                kubelet          Node no-preload-152932 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [eb554e0fb4c35a559d2cfb4f52a82951cd81a68502c1e7b6abb5b8281b3a1f18] <==
	{"level":"warn","ts":"2025-11-09T14:11:55.803194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.809674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.817666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.824012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.831569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.841046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.848152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.854570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.860747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.866442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.873347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.880422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.887270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.893815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.907322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.918515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.924889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.931315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.944937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.951433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.963562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.970225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.983607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:55.989938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:11:56.042010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52022","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:12:29 up 54 min,  0 user,  load average: 2.82, 2.81, 1.78
	Linux no-preload-152932 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7900799d6a7b88bddb8430bc6f4e69fdeeeab34dc42b715ba90bf6d871538b08] <==
	I1109 14:12:06.779138       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:12:06.779371       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1109 14:12:06.779483       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:12:06.779497       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:12:06.779518       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:12:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:12:06.979276       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:12:06.979349       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:12:06.979368       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:12:06.979876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:12:07.280397       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:12:07.280416       1 metrics.go:72] Registering metrics
	I1109 14:12:07.280468       1 controller.go:711] "Syncing nftables rules"
	I1109 14:12:16.979507       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:12:16.979577       1 main.go:301] handling current node
	I1109 14:12:26.982793       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:12:26.982828       1 main.go:301] handling current node
	
	
	==> kube-apiserver [31ab6cb262b0a4fd49047766dda9531c4d7a2595c22ac0fa39ccc3aae2eed448] <==
	I1109 14:11:56.493677       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:11:56.497162       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:11:56.497361       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:11:56.521092       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:11:56.521202       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:11:56.529463       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:11:56.693729       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:11:57.445122       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:11:57.449779       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:11:57.449795       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:11:57.845748       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:11:57.878557       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:11:58.000067       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:11:58.006791       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1109 14:11:58.007844       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:11:58.011421       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:11:58.444329       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:11:58.940495       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:11:58.957719       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:11:58.963786       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:12:03.493508       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:12:04.295136       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:12:04.298240       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:12:04.342123       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1109 14:12:28.149624       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:40658: use of closed network connection
	
	
	==> kube-controller-manager [13f394ec4c85c223dadff24f4a4d2d1dfa06397df14242861d085046dcd8323a] <==
	I1109 14:12:03.440159       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:12:03.440194       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:12:03.440204       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:12:03.440176       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 14:12:03.441451       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:12:03.441546       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:12:03.441626       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:12:03.441664       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:12:03.441670       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:12:03.441695       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:12:03.441709       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:12:03.441723       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-152932"
	I1109 14:12:03.441759       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:12:03.441774       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1109 14:12:03.441828       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:12:03.441888       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:12:03.442063       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:12:03.442676       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:12:03.444102       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:12:03.445568       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:12:03.446719       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:12:03.446764       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:12:03.446805       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:12:03.469021       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:12:18.459140       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [34aa8de3d8dd11f6d722c29157b208e830427edbe1cfbc656c1e1effb2ddfd95] <==
	I1109 14:12:04.779917       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:12:04.854345       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:12:04.954475       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:12:04.954508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1109 14:12:04.954668       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:12:04.972834       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:12:04.972877       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:12:04.977701       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:12:04.978025       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:12:04.978053       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:12:04.980163       1 config.go:200] "Starting service config controller"
	I1109 14:12:04.980208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:12:04.980230       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:12:04.980243       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:12:04.980259       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:12:04.980265       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:12:04.980482       1 config.go:309] "Starting node config controller"
	I1109 14:12:04.980539       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:12:05.081060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:12:05.081069       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:12:05.081087       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:12:05.081083       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a35c05b752be2a63ac14906154c637cd183928d6febf966756b218e5c54c7e03] <==
	E1109 14:11:56.447995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:11:56.448140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 14:11:56.448182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:11:56.448410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:11:56.448428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:11:56.448428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:11:56.448459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:11:56.448475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:11:56.448497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:11:56.448522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:11:56.448599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:11:56.448668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:11:56.448773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:11:56.448830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:11:57.251613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:11:57.362260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:11:57.387387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:11:57.401531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:11:57.452279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 14:11:57.471297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:11:57.515383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:11:57.525254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:11:57.673770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:11:57.674414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1109 14:11:59.844184       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:11:59 no-preload-152932 kubelet[2282]: I1109 14:11:59.817434    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-152932" podStartSLOduration=1.81741373 podStartE2EDuration="1.81741373s" podCreationTimestamp="2025-11-09 14:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:11:59.808435407 +0000 UTC m=+1.124104459" watchObservedRunningTime="2025-11-09 14:11:59.81741373 +0000 UTC m=+1.133082788"
	Nov 09 14:11:59 no-preload-152932 kubelet[2282]: I1109 14:11:59.828786    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-152932" podStartSLOduration=1.828770981 podStartE2EDuration="1.828770981s" podCreationTimestamp="2025-11-09 14:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:11:59.817708951 +0000 UTC m=+1.133378002" watchObservedRunningTime="2025-11-09 14:11:59.828770981 +0000 UTC m=+1.144440033"
	Nov 09 14:11:59 no-preload-152932 kubelet[2282]: I1109 14:11:59.828881    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-152932" podStartSLOduration=2.82887623 podStartE2EDuration="2.82887623s" podCreationTimestamp="2025-11-09 14:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:11:59.828849235 +0000 UTC m=+1.144518287" watchObservedRunningTime="2025-11-09 14:11:59.82887623 +0000 UTC m=+1.144545283"
	Nov 09 14:12:03 no-preload-152932 kubelet[2282]: I1109 14:12:03.468771    2282 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 09 14:12:03 no-preload-152932 kubelet[2282]: I1109 14:12:03.469397    2282 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.389312    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4781592d-4e13-43cf-ab43-5dd06b78a71c-xtables-lock\") pod \"kube-proxy-f5tgg\" (UID: \"4781592d-4e13-43cf-ab43-5dd06b78a71c\") " pod="kube-system/kube-proxy-f5tgg"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.389372    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dee764ff-0946-490a-b98d-dca6e1d60fa2-xtables-lock\") pod \"kindnet-qk599\" (UID: \"dee764ff-0946-490a-b98d-dca6e1d60fa2\") " pod="kube-system/kindnet-qk599"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.389399    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znxsk\" (UniqueName: \"kubernetes.io/projected/4781592d-4e13-43cf-ab43-5dd06b78a71c-kube-api-access-znxsk\") pod \"kube-proxy-f5tgg\" (UID: \"4781592d-4e13-43cf-ab43-5dd06b78a71c\") " pod="kube-system/kube-proxy-f5tgg"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.389426    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4781592d-4e13-43cf-ab43-5dd06b78a71c-lib-modules\") pod \"kube-proxy-f5tgg\" (UID: \"4781592d-4e13-43cf-ab43-5dd06b78a71c\") " pod="kube-system/kube-proxy-f5tgg"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.389504    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dee764ff-0946-490a-b98d-dca6e1d60fa2-cni-cfg\") pod \"kindnet-qk599\" (UID: \"dee764ff-0946-490a-b98d-dca6e1d60fa2\") " pod="kube-system/kindnet-qk599"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.389534    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dee764ff-0946-490a-b98d-dca6e1d60fa2-lib-modules\") pod \"kindnet-qk599\" (UID: \"dee764ff-0946-490a-b98d-dca6e1d60fa2\") " pod="kube-system/kindnet-qk599"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.389556    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fljn\" (UniqueName: \"kubernetes.io/projected/dee764ff-0946-490a-b98d-dca6e1d60fa2-kube-api-access-7fljn\") pod \"kindnet-qk599\" (UID: \"dee764ff-0946-490a-b98d-dca6e1d60fa2\") " pod="kube-system/kindnet-qk599"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.389582    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4781592d-4e13-43cf-ab43-5dd06b78a71c-kube-proxy\") pod \"kube-proxy-f5tgg\" (UID: \"4781592d-4e13-43cf-ab43-5dd06b78a71c\") " pod="kube-system/kube-proxy-f5tgg"
	Nov 09 14:12:04 no-preload-152932 kubelet[2282]: I1109 14:12:04.803917    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f5tgg" podStartSLOduration=0.803899187 podStartE2EDuration="803.899187ms" podCreationTimestamp="2025-11-09 14:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:12:04.803733899 +0000 UTC m=+6.119402950" watchObservedRunningTime="2025-11-09 14:12:04.803899187 +0000 UTC m=+6.119568238"
	Nov 09 14:12:06 no-preload-152932 kubelet[2282]: I1109 14:12:06.998379    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qk599" podStartSLOduration=1.108528906 podStartE2EDuration="2.998357693s" podCreationTimestamp="2025-11-09 14:12:04 +0000 UTC" firstStartedPulling="2025-11-09 14:12:04.682053006 +0000 UTC m=+5.997722039" lastFinishedPulling="2025-11-09 14:12:06.571881795 +0000 UTC m=+7.887550826" observedRunningTime="2025-11-09 14:12:06.801844048 +0000 UTC m=+8.117513100" watchObservedRunningTime="2025-11-09 14:12:06.998357693 +0000 UTC m=+8.314026752"
	Nov 09 14:12:17 no-preload-152932 kubelet[2282]: I1109 14:12:17.112836    2282 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 09 14:12:17 no-preload-152932 kubelet[2282]: I1109 14:12:17.177089    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6brjq\" (UniqueName: \"kubernetes.io/projected/e01530eb-50f0-45f2-952e-7efa3499ea36-kube-api-access-6brjq\") pod \"coredns-66bc5c9577-6ssc5\" (UID: \"e01530eb-50f0-45f2-952e-7efa3499ea36\") " pod="kube-system/coredns-66bc5c9577-6ssc5"
	Nov 09 14:12:17 no-preload-152932 kubelet[2282]: I1109 14:12:17.177124    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3fc6ed1a-8d6d-430a-949e-8023c87830f3-tmp\") pod \"storage-provisioner\" (UID: \"3fc6ed1a-8d6d-430a-949e-8023c87830f3\") " pod="kube-system/storage-provisioner"
	Nov 09 14:12:17 no-preload-152932 kubelet[2282]: I1109 14:12:17.177140    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxs8v\" (UniqueName: \"kubernetes.io/projected/3fc6ed1a-8d6d-430a-949e-8023c87830f3-kube-api-access-hxs8v\") pod \"storage-provisioner\" (UID: \"3fc6ed1a-8d6d-430a-949e-8023c87830f3\") " pod="kube-system/storage-provisioner"
	Nov 09 14:12:17 no-preload-152932 kubelet[2282]: I1109 14:12:17.177157    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01530eb-50f0-45f2-952e-7efa3499ea36-config-volume\") pod \"coredns-66bc5c9577-6ssc5\" (UID: \"e01530eb-50f0-45f2-952e-7efa3499ea36\") " pod="kube-system/coredns-66bc5c9577-6ssc5"
	Nov 09 14:12:17 no-preload-152932 kubelet[2282]: I1109 14:12:17.823368    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.823349593 podStartE2EDuration="13.823349593s" podCreationTimestamp="2025-11-09 14:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:12:17.823095401 +0000 UTC m=+19.138764645" watchObservedRunningTime="2025-11-09 14:12:17.823349593 +0000 UTC m=+19.139018645"
	Nov 09 14:12:17 no-preload-152932 kubelet[2282]: I1109 14:12:17.831817    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6ssc5" podStartSLOduration=13.831799112 podStartE2EDuration="13.831799112s" podCreationTimestamp="2025-11-09 14:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:12:17.831651437 +0000 UTC m=+19.147320476" watchObservedRunningTime="2025-11-09 14:12:17.831799112 +0000 UTC m=+19.147468164"
	Nov 09 14:12:20 no-preload-152932 kubelet[2282]: I1109 14:12:20.092489    2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fcr4\" (UniqueName: \"kubernetes.io/projected/84072b78-3173-4704-8820-d187e9262dd9-kube-api-access-2fcr4\") pod \"busybox\" (UID: \"84072b78-3173-4704-8820-d187e9262dd9\") " pod="default/busybox"
	Nov 09 14:12:21 no-preload-152932 kubelet[2282]: I1109 14:12:21.835683    2282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.0832775510000001 podStartE2EDuration="1.835662569s" podCreationTimestamp="2025-11-09 14:12:20 +0000 UTC" firstStartedPulling="2025-11-09 14:12:20.37260559 +0000 UTC m=+21.688274811" lastFinishedPulling="2025-11-09 14:12:21.124990798 +0000 UTC m=+22.440659829" observedRunningTime="2025-11-09 14:12:21.835535564 +0000 UTC m=+23.151204615" watchObservedRunningTime="2025-11-09 14:12:21.835662569 +0000 UTC m=+23.151331622"
	Nov 09 14:12:28 no-preload-152932 kubelet[2282]: E1109 14:12:28.149546    2282 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38216->127.0.0.1:46069: write tcp 127.0.0.1:38216->127.0.0.1:46069: write: connection reset by peer
	
	
	==> storage-provisioner [31924232a626f8a31f051bdcdc70d62aa788b11a3a95f3cba3b574d416313497] <==
	I1109 14:12:17.490995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:12:17.498886       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:12:17.498938       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:12:17.500784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:17.504550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:12:17.504728       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:12:17.504788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"080c7d7d-4bc8-4b08-b04d-f039e8b65be0", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-152932_e9457e45-db64-4406-890c-c967d62401f9 became leader
	I1109 14:12:17.504883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-152932_e9457e45-db64-4406-890c-c967d62401f9!
	W1109 14:12:17.508461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:17.511820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:12:17.605779       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-152932_e9457e45-db64-4406-890c-c967d62401f9!
	W1109 14:12:19.514202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:19.517748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:21.520143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:21.523553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:23.526199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:23.529913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:25.532433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:25.535848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:27.538501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:27.541944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:29.545438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:12:29.550519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152932 -n no-preload-152932
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-152932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.21s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.98s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (233.373753ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-273180 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-273180 describe deploy/metrics-server -n kube-system: exit status 1 (58.740225ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-273180 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-273180
helpers_test.go:243: (dbg) docker inspect embed-certs-273180:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7",
	        "Created": "2025-11-09T14:12:40.11425745Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248501,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:12:40.154323967Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/hosts",
	        "LogPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7-json.log",
	        "Name": "/embed-certs-273180",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-273180:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-273180",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7",
	                "LowerDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-273180",
	                "Source": "/var/lib/docker/volumes/embed-certs-273180/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-273180",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-273180",
	                "name.minikube.sigs.k8s.io": "embed-certs-273180",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1548f4de5ed5f50d44fd4c3edfa2e670d3d94f1024833a008179ae7833954a5",
	            "SandboxKey": "/var/run/docker/netns/a1548f4de5ed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-273180": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e9:d1:b1:19:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e4394163f33a23d3fe460b68d1b70efd91c45ded0aedfe59220d7876ad042ed",
	                    "EndpointID": "bd76aa8b61ebb51b515e151e2b70b88a50f3e2862c8e9107bc33a3a8e05a18f2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-273180",
	                        "da002f6826ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
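The full docker inspect dump above is what the post-mortem captures; when only the container state and the forwarded host ports matter, the same JSON can be queried in place with docker's --format Go templates. A sketch (container name taken from the dump above):

    docker inspect embed-certs-273180 \
      --format '{{.State.Status}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} api={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

Against the dump above this would report the running state with host ports 33070 (ssh) and 33073 (API server).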
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-273180 -n embed-certs-273180
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-273180 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ cert-options-350702 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-350702          │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ ssh     │ -p cert-options-350702 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-350702          │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ delete  │ -p cert-options-350702                                                                                                                                                                                                                        │ cert-options-350702          │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ pause   │ -p pause-092489 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	│ delete  │ -p pause-092489                                                                                                                                                                                                                               │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ stop    │ -p old-k8s-version-169816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p cert-expiration-883873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-169816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ stop    │ -p no-preload-152932 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ delete  │ -p cert-expiration-883873                                                                                                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-152932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p kubernetes-upgrade-755159                                                                                                                                                                                                                  │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:13:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:13:17.185719  256773 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:13:17.185955  256773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:17.185962  256773 out.go:374] Setting ErrFile to fd 2...
	I1109 14:13:17.185966  256773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:17.186158  256773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:13:17.186568  256773 out.go:368] Setting JSON to false
	I1109 14:13:17.187686  256773 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3347,"bootTime":1762694250,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:13:17.187762  256773 start.go:143] virtualization: kvm guest
	I1109 14:13:17.189520  256773 out.go:179] * [default-k8s-diff-port-326524] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:13:17.191078  256773 notify.go:221] Checking for updates...
	I1109 14:13:17.191098  256773 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:13:17.192262  256773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:13:17.193437  256773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:13:17.194507  256773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:13:17.195596  256773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:13:17.196680  256773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:13:17.198200  256773 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:17.198297  256773 config.go:182] Loaded profile config "no-preload-152932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:17.198363  256773 config.go:182] Loaded profile config "old-k8s-version-169816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:13:17.198433  256773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:13:17.223091  256773 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:13:17.223203  256773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:17.287312  256773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:13:17.275776886 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:17.287415  256773 docker.go:319] overlay module found
	I1109 14:13:17.289116  256773 out.go:179] * Using the docker driver based on user configuration
	I1109 14:13:17.290229  256773 start.go:309] selected driver: docker
	I1109 14:13:17.290242  256773 start.go:930] validating driver "docker" against <nil>
	I1109 14:13:17.290253  256773 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:13:17.290851  256773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:17.343940  256773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:13:17.334239049 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:17.344115  256773 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:13:17.344374  256773 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:17.345927  256773 out.go:179] * Using Docker driver with root privileges
	I1109 14:13:17.347106  256773 cni.go:84] Creating CNI manager for ""
	I1109 14:13:17.347162  256773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:17.347171  256773 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:13:17.347232  256773 start.go:353] cluster config:
	{Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:17.348420  256773 out.go:179] * Starting "default-k8s-diff-port-326524" primary control-plane node in "default-k8s-diff-port-326524" cluster
	I1109 14:13:17.349353  256773 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:13:17.350328  256773 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:13:17.351300  256773 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:17.351333  256773 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:13:17.351341  256773 cache.go:65] Caching tarball of preloaded images
	I1109 14:13:17.351378  256773 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:13:17.351457  256773 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:13:17.351473  256773 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:13:17.351571  256773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/config.json ...
	I1109 14:13:17.351595  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/config.json: {Name:mk6fab699afd6d53f2fdcb141a735fa8da65c44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:17.370665  256773 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:13:17.370687  256773 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:13:17.370711  256773 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:13:17.370737  256773 start.go:360] acquireMachinesLock for default-k8s-diff-port-326524: {Name:mk380b0156a652cb7885053d4cba5ab348316819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:13:17.370865  256773 start.go:364] duration metric: took 106.738µs to acquireMachinesLock for "default-k8s-diff-port-326524"
	I1109 14:13:17.370892  256773 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:13:17.370981  256773 start.go:125] createHost starting for "" (driver="docker")
	W1109 14:13:15.652225  246717 node_ready.go:57] node "embed-certs-273180" has "Ready":"False" status (will retry)
	I1109 14:13:16.152405  246717 node_ready.go:49] node "embed-certs-273180" is "Ready"
	I1109 14:13:16.152430  246717 node_ready.go:38] duration metric: took 12.503394947s for node "embed-certs-273180" to be "Ready" ...
	I1109 14:13:16.152444  246717 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:13:16.152482  246717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:13:16.165139  246717 api_server.go:72] duration metric: took 12.905051503s to wait for apiserver process to appear ...
	I1109 14:13:16.165160  246717 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:13:16.165174  246717 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1109 14:13:16.169192  246717 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1109 14:13:16.170050  246717 api_server.go:141] control plane version: v1.34.1
	I1109 14:13:16.170071  246717 api_server.go:131] duration metric: took 4.906156ms to wait for apiserver health ...
	I1109 14:13:16.170079  246717 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:13:16.174346  246717 system_pods.go:59] 8 kube-system pods found
	I1109 14:13:16.174385  246717 system_pods.go:61] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:13:16.174393  246717 system_pods.go:61] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.174402  246717 system_pods.go:61] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.174413  246717 system_pods.go:61] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.174419  246717 system_pods.go:61] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.174424  246717 system_pods.go:61] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.174429  246717 system_pods.go:61] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.174436  246717 system_pods.go:61] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:13:16.174447  246717 system_pods.go:74] duration metric: took 4.362755ms to wait for pod list to return data ...
	I1109 14:13:16.174463  246717 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:13:16.176894  246717 default_sa.go:45] found service account: "default"
	I1109 14:13:16.176922  246717 default_sa.go:55] duration metric: took 2.451484ms for default service account to be created ...
	I1109 14:13:16.176931  246717 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:13:16.181020  246717 system_pods.go:86] 8 kube-system pods found
	I1109 14:13:16.181045  246717 system_pods.go:89] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:13:16.181052  246717 system_pods.go:89] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.181060  246717 system_pods.go:89] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.181064  246717 system_pods.go:89] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.181071  246717 system_pods.go:89] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.181075  246717 system_pods.go:89] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.181081  246717 system_pods.go:89] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.181093  246717 system_pods.go:89] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:13:16.181114  246717 retry.go:31] will retry after 197.947699ms: missing components: kube-dns
	I1109 14:13:16.384079  246717 system_pods.go:86] 8 kube-system pods found
	I1109 14:13:16.384113  246717 system_pods.go:89] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Running
	I1109 14:13:16.384119  246717 system_pods.go:89] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.384123  246717 system_pods.go:89] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.384127  246717 system_pods.go:89] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.384134  246717 system_pods.go:89] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.384143  246717 system_pods.go:89] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.384148  246717 system_pods.go:89] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.384153  246717 system_pods.go:89] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Running
	I1109 14:13:16.384163  246717 system_pods.go:126] duration metric: took 207.224839ms to wait for k8s-apps to be running ...
	I1109 14:13:16.384180  246717 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:13:16.384240  246717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:16.399990  246717 system_svc.go:56] duration metric: took 15.781627ms WaitForService to wait for kubelet
	I1109 14:13:16.400025  246717 kubeadm.go:587] duration metric: took 13.139938623s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:16.400050  246717 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:13:16.403106  246717 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:13:16.403135  246717 node_conditions.go:123] node cpu capacity is 8
	I1109 14:13:16.403153  246717 node_conditions.go:105] duration metric: took 3.089934ms to run NodePressure ...
	I1109 14:13:16.403168  246717 start.go:242] waiting for startup goroutines ...
	I1109 14:13:16.403182  246717 start.go:247] waiting for cluster config update ...
	I1109 14:13:16.403195  246717 start.go:256] writing updated cluster config ...
	I1109 14:13:16.403401  246717 ssh_runner.go:195] Run: rm -f paused
	I1109 14:13:16.407239  246717 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:16.410710  246717 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.414784  246717 pod_ready.go:94] pod "coredns-66bc5c9577-bbnm4" is "Ready"
	I1109 14:13:16.414802  246717 pod_ready.go:86] duration metric: took 4.066203ms for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.416531  246717 pod_ready.go:83] waiting for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.420036  246717 pod_ready.go:94] pod "etcd-embed-certs-273180" is "Ready"
	I1109 14:13:16.420052  246717 pod_ready.go:86] duration metric: took 3.498205ms for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.422008  246717 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.425498  246717 pod_ready.go:94] pod "kube-apiserver-embed-certs-273180" is "Ready"
	I1109 14:13:16.425518  246717 pod_ready.go:86] duration metric: took 3.492681ms for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.427143  246717 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.811543  246717 pod_ready.go:94] pod "kube-controller-manager-embed-certs-273180" is "Ready"
	I1109 14:13:16.811564  246717 pod_ready.go:86] duration metric: took 384.404326ms for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.011653  246717 pod_ready.go:83] waiting for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.412029  246717 pod_ready.go:94] pod "kube-proxy-k6zsl" is "Ready"
	I1109 14:13:17.412057  246717 pod_ready.go:86] duration metric: took 400.379485ms for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.612189  246717 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:18.011980  246717 pod_ready.go:94] pod "kube-scheduler-embed-certs-273180" is "Ready"
	I1109 14:13:18.012004  246717 pod_ready.go:86] duration metric: took 399.78913ms for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:18.012015  246717 pod_ready.go:40] duration metric: took 1.604746997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:18.062408  246717 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:13:18.063827  246717 out.go:179] * Done! kubectl is now configured to use "embed-certs-273180" cluster and "default" namespace by default
	W1109 14:13:14.734612  243958 pod_ready.go:104] pod "coredns-5dd5756b68-5bgfs" is not "Ready", error: <nil>
	W1109 14:13:16.735051  243958 pod_ready.go:104] pod "coredns-5dd5756b68-5bgfs" is not "Ready", error: <nil>
	W1109 14:13:15.269842  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:17.273879  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:19.234767  243958 pod_ready.go:94] pod "coredns-5dd5756b68-5bgfs" is "Ready"
	I1109 14:13:19.234799  243958 pod_ready.go:86] duration metric: took 38.505686458s for pod "coredns-5dd5756b68-5bgfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.237750  243958 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.242233  243958 pod_ready.go:94] pod "etcd-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.242258  243958 pod_ready.go:86] duration metric: took 4.482172ms for pod "etcd-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.245330  243958 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.249728  243958 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.249746  243958 pod_ready.go:86] duration metric: took 4.394681ms for pod "kube-apiserver-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.252151  243958 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.433101  243958 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.433127  243958 pod_ready.go:86] duration metric: took 180.958702ms for pod "kube-controller-manager-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.634032  243958 pod_ready.go:83] waiting for pod "kube-proxy-96xbm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.033559  243958 pod_ready.go:94] pod "kube-proxy-96xbm" is "Ready"
	I1109 14:13:20.033591  243958 pod_ready.go:86] duration metric: took 399.53199ms for pod "kube-proxy-96xbm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.233541  243958 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.632834  243958 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-169816" is "Ready"
	I1109 14:13:20.632865  243958 pod_ready.go:86] duration metric: took 399.296239ms for pod "kube-scheduler-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.632880  243958 pod_ready.go:40] duration metric: took 39.910042807s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:20.676081  243958 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1109 14:13:20.700243  243958 out.go:203] 
	W1109 14:13:20.702346  243958 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1109 14:13:20.704036  243958 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1109 14:13:20.709285  243958 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-169816" cluster and "default" namespace by default
	I1109 14:13:17.373015  256773 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:13:17.373227  256773 start.go:159] libmachine.API.Create for "default-k8s-diff-port-326524" (driver="docker")
	I1109 14:13:17.373253  256773 client.go:173] LocalClient.Create starting
	I1109 14:13:17.373339  256773 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:13:17.373379  256773 main.go:143] libmachine: Decoding PEM data...
	I1109 14:13:17.373402  256773 main.go:143] libmachine: Parsing certificate...
	I1109 14:13:17.373471  256773 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:13:17.373499  256773 main.go:143] libmachine: Decoding PEM data...
	I1109 14:13:17.373516  256773 main.go:143] libmachine: Parsing certificate...
	I1109 14:13:17.373944  256773 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:13:17.390599  256773 cli_runner.go:211] docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:13:17.390692  256773 network_create.go:284] running [docker network inspect default-k8s-diff-port-326524] to gather additional debugging logs...
	I1109 14:13:17.390717  256773 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524
	W1109 14:13:17.406825  256773 cli_runner.go:211] docker network inspect default-k8s-diff-port-326524 returned with exit code 1
	I1109 14:13:17.406849  256773 network_create.go:287] error running [docker network inspect default-k8s-diff-port-326524]: docker network inspect default-k8s-diff-port-326524: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-326524 not found
	I1109 14:13:17.406863  256773 network_create.go:289] output of [docker network inspect default-k8s-diff-port-326524]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-326524 not found
	
	** /stderr **
	I1109 14:13:17.406974  256773 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:13:17.424550  256773 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:13:17.425251  256773 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:13:17.425985  256773 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:13:17.426428  256773 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f0ef03f929b3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:cd:f4:b2:ad:24} reservation:<nil>}
	I1109 14:13:17.427179  256773 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f05c30}
	I1109 14:13:17.427204  256773 network_create.go:124] attempt to create docker network default-k8s-diff-port-326524 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1109 14:13:17.427254  256773 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 default-k8s-diff-port-326524
	I1109 14:13:17.482890  256773 network_create.go:108] docker network default-k8s-diff-port-326524 192.168.85.0/24 created
	I1109 14:13:17.482919  256773 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-326524" container
	I1109 14:13:17.482985  256773 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:13:17.499994  256773 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-326524 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:13:17.516895  256773 oci.go:103] Successfully created a docker volume default-k8s-diff-port-326524
	I1109 14:13:17.516975  256773 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-326524-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --entrypoint /usr/bin/test -v default-k8s-diff-port-326524:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:13:17.902500  256773 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-326524
	I1109 14:13:17.902548  256773 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:17.902557  256773 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:13:17.902632  256773 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-326524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1109 14:13:19.770606  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:21.877241  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:24.270217  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
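The cli_runner entries above show minikube scanning for a free subnet, settling on 192.168.85.0/24, creating the default-k8s-diff-port-326524 bridge network, and then extracting the preloaded image tarball into the cluster volume. A minimal Go sketch of that same docker network create invocation, with the arguments copied verbatim from the log (minikube computes them in network_create.go), would be:

package main

import (
	"fmt"
	"os/exec"
)

// Re-issues the "docker network create" call recorded in the log above.
// Subnet, gateway and labels are the values minikube chose for this run.
func main() {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.85.0/24",
		"--gateway=192.168.85.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=default-k8s-diff-port-326524",
		"default-k8s-diff-port-326524")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}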
	
	
	==> CRI-O <==
	Nov 09 14:13:16 embed-certs-273180 crio[776]: time="2025-11-09T14:13:16.026521396Z" level=info msg="Started container" PID=1840 containerID=89b1c2554e91760e1d5aba2c5730e782778b6ea65bf95cb49ffd6de01f600df2 description=kube-system/storage-provisioner/storage-provisioner id=871dd822-d08a-4cdc-8b9a-8d7785aa4dad name=/runtime.v1.RuntimeService/StartContainer sandboxID=f6fc3084022b883bfc4ab53428e10e7608132b456beac8b3db51027eb953d7dc
	Nov 09 14:13:16 embed-certs-273180 crio[776]: time="2025-11-09T14:13:16.02831774Z" level=info msg="Started container" PID=1843 containerID=bb47721ef8e19030e354324cf1627db55a5cd203e48a88898c1b9851077c5c83 description=kube-system/coredns-66bc5c9577-bbnm4/coredns id=0d92423f-d351-4a0e-9800-8746416ed3cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=512f41ab056c315babd526b1aa9fdc3d0b8b61e723b1e8792cfd140d94ae1ebe
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.524283794Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4940f5f3-8902-4fac-bd04-a03f7150c5f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.524370703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.529232563Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:91dc0875a0ad8eaa6d71d4752127fa47df7c82c2717f673a8c74c1b6305b5311 UID:e136284b-ac76-4b4f-ba01-633f83baa0e8 NetNS:/var/run/netns/a5c937b0-915f-403d-8d00-07fb37258c3e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad38}] Aliases:map[]}"
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.529322033Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.539814569Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:91dc0875a0ad8eaa6d71d4752127fa47df7c82c2717f673a8c74c1b6305b5311 UID:e136284b-ac76-4b4f-ba01-633f83baa0e8 NetNS:/var/run/netns/a5c937b0-915f-403d-8d00-07fb37258c3e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad38}] Aliases:map[]}"
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.539978261Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.540863401Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.541969819Z" level=info msg="Ran pod sandbox 91dc0875a0ad8eaa6d71d4752127fa47df7c82c2717f673a8c74c1b6305b5311 with infra container: default/busybox/POD" id=4940f5f3-8902-4fac-bd04-a03f7150c5f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.543138567Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=151f0be2-4085-4ccf-9d88-b7acba08b630 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.543248948Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=151f0be2-4085-4ccf-9d88-b7acba08b630 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.543282285Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=151f0be2-4085-4ccf-9d88-b7acba08b630 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.544010505Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9e8fac2a-72e5-4ec1-a2d7-d618aed9e5c6 name=/runtime.v1.ImageService/PullImage
	Nov 09 14:13:18 embed-certs-273180 crio[776]: time="2025-11-09T14:13:18.547525574Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.21397774Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9e8fac2a-72e5-4ec1-a2d7-d618aed9e5c6 name=/runtime.v1.ImageService/PullImage
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.214711803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e906602-0f2c-4981-8c90-ecc0940f94cd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.216029302Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1e137883-dc4a-445e-8eb7-3bc6b7067b66 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.219290588Z" level=info msg="Creating container: default/busybox/busybox" id=a2ae5b8c-b967-43ec-94d9-a8625478fca9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.219414901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.224088587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.224676339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.25361439Z" level=info msg="Created container 53a579a85d2d10b1a739ef800598335c1dc6dd52a7217af59ee482bf1e330403: default/busybox/busybox" id=a2ae5b8c-b967-43ec-94d9-a8625478fca9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.254235828Z" level=info msg="Starting container: 53a579a85d2d10b1a739ef800598335c1dc6dd52a7217af59ee482bf1e330403" id=71abb975-5793-41ef-a923-ec7dbb55ea66 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:19 embed-certs-273180 crio[776]: time="2025-11-09T14:13:19.256032048Z" level=info msg="Started container" PID=1920 containerID=53a579a85d2d10b1a739ef800598335c1dc6dd52a7217af59ee482bf1e330403 description=default/busybox/busybox id=71abb975-5793-41ef-a923-ec7dbb55ea66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=91dc0875a0ad8eaa6d71d4752127fa47df7c82c2717f673a8c74c1b6305b5311
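The CRI-O entries above trace the busybox image pull for the new pod: an ImageStatus check that finds nothing locally, followed by a PullImage from gcr.io and the container start. A rough equivalent driven by crictl on the node (an illustrative sketch; it assumes crictl is on PATH inside the minikube container, as it is in the kicbase image) is:

package main

import (
	"fmt"
	"os/exec"
)

// Mirrors the ImageStatus -> PullImage sequence from the CRI-O log above.
func main() {
	image := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

	// crictl inspecti exits non-zero when the image is not present locally,
	// which corresponds to the "not found" ImageStatus response in the log.
	if err := exec.Command("crictl", "inspecti", image).Run(); err != nil {
		fmt.Println("image not present, pulling:", image)
		out, pullErr := exec.Command("crictl", "pull", image).CombinedOutput()
		fmt.Printf("%s", out)
		if pullErr != nil {
			fmt.Println("pull failed:", pullErr)
		}
	}
}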
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	53a579a85d2d1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   91dc0875a0ad8       busybox                                      default
	bb47721ef8e19       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   512f41ab056c3       coredns-66bc5c9577-bbnm4                     kube-system
	89b1c2554e917       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   f6fc3084022b8       storage-provisioner                          kube-system
	0bc66766e63ea       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   105dccc8192cf       kindnet-scgq8                                kube-system
	3420ce4826b70       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   d5faafc6f17b9       kube-proxy-k6zsl                             kube-system
	11157c82da59a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   e899118671484       kube-apiserver-embed-certs-273180            kube-system
	fc0b35b631404       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   99f34b9c9c2cc       kube-controller-manager-embed-certs-273180   kube-system
	3227159c3c906       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   c17f3dd0f4c40       etcd-embed-certs-273180                      kube-system
	9ba2434518c8f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   b9857d9586c16       kube-scheduler-embed-certs-273180            kube-system
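The container status table above comes from the CRI runtime rather than from kubectl. A short sketch that produces a comparable listing with crictl, again assuming it runs inside the node (for example via minikube ssh):

package main

import (
	"fmt"
	"os/exec"
)

// Lists all CRI-O pod sandboxes and containers, running or exited,
// similar to the "container status" table above.
func main() {
	for _, args := range [][]string{{"pods"}, {"ps", "-a"}} {
		out, err := exec.Command("crictl", args...).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("crictl", args, "failed:", err)
		}
	}
}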
	
	
	==> coredns [bb47721ef8e19030e354324cf1627db55a5cd203e48a88898c1b9851077c5c83] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48486 - 28362 "HINFO IN 1385585574307749667.470891577896137928. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.893210807s
	
	
	==> describe nodes <==
	Name:               embed-certs-273180
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-273180
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=embed-certs-273180
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_12_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:12:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-273180
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:13:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:13:15 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:13:15 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:13:15 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:13:15 +0000   Sun, 09 Nov 2025 14:13:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-273180
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                ca6fdff2-5006-4b63-a78c-0c296485de58
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-bbnm4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-273180                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-scgq8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-273180             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-273180    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-k6zsl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-273180             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-273180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-273180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-273180 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-273180 event: Registered Node embed-certs-273180 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-273180 status is now: NodeReady
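The Conditions block above shows the node flipping to Ready at 14:13:15, right before coredns and storage-provisioner start in the CRI-O log. A sketch of reading the same Ready condition with client-go; the node name is taken from this run and the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Prints the Ready condition reported in the "describe nodes" output above.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "embed-certs-273180", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s since=%s\n", c.Status, c.Reason, c.LastTransitionTime.Time)
		}
	}
}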
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [3227159c3c90621328b621db321d44cf86c37059a906526d24c53147a347af30] <==
	{"level":"warn","ts":"2025-11-09T14:12:54.476617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.485840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.492967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.508927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.523553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.530559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.536977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.544249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.550253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.557095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.568774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.577808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.587627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.594572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.601698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.607804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.622491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.629890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.636172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.643963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.650771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.667483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.676197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.684258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:54.752970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47548","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:13:27 up 55 min,  0 user,  load average: 3.02, 2.86, 1.86
	Linux embed-certs-273180 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bc66766e63eac18f68891cd523f7a8b5310dac0696569665bd4a3092fc2c215] <==
	I1109 14:13:05.117991       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:13:05.118264       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1109 14:13:05.118403       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:13:05.118423       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:13:05.118449       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:13:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:13:05.416110       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:13:05.416144       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:13:05.416156       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:13:05.416521       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:13:05.716333       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:13:05.716363       1 metrics.go:72] Registering metrics
	I1109 14:13:05.716447       1 controller.go:711] "Syncing nftables rules"
	I1109 14:13:15.414878       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:13:15.414932       1 main.go:301] handling current node
	I1109 14:13:25.418276       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:13:25.418305       1 main.go:301] handling current node
	
	
	==> kube-apiserver [11157c82da59a255133e426547a60c023467123cca9bae8ea0abff08b08d0c53] <==
	E1109 14:12:55.420138       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1109 14:12:55.462557       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:12:55.475841       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:12:55.475888       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:12:55.482530       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:12:55.482586       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:12:55.550005       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:12:56.268315       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:12:56.273144       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:12:56.273163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:12:56.868459       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:12:56.914486       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:12:56.969841       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:12:56.979139       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1109 14:12:56.980219       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:12:56.985143       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:12:57.309191       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:12:58.183893       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:12:58.193680       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:12:58.201059       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:13:02.962105       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:13:02.971637       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:13:03.060073       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1109 14:13:03.370384       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1109 14:13:26.318994       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:54852: use of closed network connection
	
	
	==> kube-controller-manager [fc0b35b6314046874eee8ec1ef1d2c7fa74a2e7d5c1a8f0c7d20ebcb089be618] <==
	I1109 14:13:02.267851       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:13:02.278069       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:13:02.295329       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:13:02.304497       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 14:13:02.305660       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:13:02.305680       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:13:02.306876       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:13:02.306891       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:13:02.306914       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 14:13:02.306918       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:13:02.306976       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:13:02.307035       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:13:02.307569       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:13:02.307675       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1109 14:13:02.308487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1109 14:13:02.309674       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:13:02.309775       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:13:02.309857       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:13:02.312103       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:13:02.313312       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:13:02.313333       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:13:02.315458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:13:02.322751       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:13:02.329336       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:13:17.258967       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3420ce4826b70c78bd4f2ff2722fd834f814cf615c642c4df07401ccecfe62ce] <==
	I1109 14:13:04.982688       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:13:05.071854       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:13:05.172614       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:13:05.172680       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1109 14:13:05.172798       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:13:05.197046       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:13:05.197127       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:13:05.204079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:13:05.204885       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:13:05.204926       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:13:05.206664       1 config.go:200] "Starting service config controller"
	I1109 14:13:05.206723       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:13:05.206754       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:13:05.206759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:13:05.206772       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:13:05.206777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:13:05.206917       1 config.go:309] "Starting node config controller"
	I1109 14:13:05.206930       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:13:05.306875       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:13:05.306904       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:13:05.306945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:13:05.306990       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [9ba2434518c8fb608055632f3c312f56291eea3180f607a22e4b0af06e8f947f] <==
	E1109 14:12:55.597575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:12:55.597731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:12:55.597856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:12:55.597954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:12:55.599665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:12:55.599920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:12:55.599926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:12:55.600072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:12:55.600100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:12:55.600184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:12:55.600263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:12:55.600306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:12:55.600131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:12:55.600394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:12:55.600415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:12:55.600426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:12:55.600478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:12:55.600534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:12:56.469350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:12:56.485953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:12:56.548459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:12:56.585354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:12:56.615871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:12:56.634531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1109 14:12:58.595159       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: E1109 14:13:03.085001    1319 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-k6zsl\" is forbidden: User \"system:node:embed-certs-273180\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-273180' and this object" podUID="aa0ed3ae-34a8-4368-8e1c-385033e46f0e" pod="kube-system/kube-proxy-k6zsl"
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: I1109 14:13:03.147111    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba-cni-cfg\") pod \"kindnet-scgq8\" (UID: \"5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba\") " pod="kube-system/kindnet-scgq8"
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: I1109 14:13:03.147158    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g62l\" (UniqueName: \"kubernetes.io/projected/5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba-kube-api-access-6g62l\") pod \"kindnet-scgq8\" (UID: \"5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba\") " pod="kube-system/kindnet-scgq8"
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: I1109 14:13:03.147188    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhqn7\" (UniqueName: \"kubernetes.io/projected/aa0ed3ae-34a8-4368-8e1c-385033e46f0e-kube-api-access-hhqn7\") pod \"kube-proxy-k6zsl\" (UID: \"aa0ed3ae-34a8-4368-8e1c-385033e46f0e\") " pod="kube-system/kube-proxy-k6zsl"
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: I1109 14:13:03.147216    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba-xtables-lock\") pod \"kindnet-scgq8\" (UID: \"5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba\") " pod="kube-system/kindnet-scgq8"
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: I1109 14:13:03.147240    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa0ed3ae-34a8-4368-8e1c-385033e46f0e-xtables-lock\") pod \"kube-proxy-k6zsl\" (UID: \"aa0ed3ae-34a8-4368-8e1c-385033e46f0e\") " pod="kube-system/kube-proxy-k6zsl"
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: I1109 14:13:03.147315    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa0ed3ae-34a8-4368-8e1c-385033e46f0e-lib-modules\") pod \"kube-proxy-k6zsl\" (UID: \"aa0ed3ae-34a8-4368-8e1c-385033e46f0e\") " pod="kube-system/kube-proxy-k6zsl"
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: I1109 14:13:03.147356    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba-lib-modules\") pod \"kindnet-scgq8\" (UID: \"5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba\") " pod="kube-system/kindnet-scgq8"
	Nov 09 14:13:03 embed-certs-273180 kubelet[1319]: I1109 14:13:03.147396    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa0ed3ae-34a8-4368-8e1c-385033e46f0e-kube-proxy\") pod \"kube-proxy-k6zsl\" (UID: \"aa0ed3ae-34a8-4368-8e1c-385033e46f0e\") " pod="kube-system/kube-proxy-k6zsl"
	Nov 09 14:13:04 embed-certs-273180 kubelet[1319]: E1109 14:13:04.255228    1319 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:13:04 embed-certs-273180 kubelet[1319]: E1109 14:13:04.255268    1319 projected.go:196] Error preparing data for projected volume kube-api-access-6g62l for pod kube-system/kindnet-scgq8: failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:13:04 embed-certs-273180 kubelet[1319]: E1109 14:13:04.255368    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba-kube-api-access-6g62l podName:5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba nodeName:}" failed. No retries permitted until 2025-11-09 14:13:04.755336251 +0000 UTC m=+6.741130053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6g62l" (UniqueName: "kubernetes.io/projected/5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba-kube-api-access-6g62l") pod "kindnet-scgq8" (UID: "5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba") : failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:13:04 embed-certs-273180 kubelet[1319]: E1109 14:13:04.257350    1319 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:13:04 embed-certs-273180 kubelet[1319]: E1109 14:13:04.257383    1319 projected.go:196] Error preparing data for projected volume kube-api-access-hhqn7 for pod kube-system/kube-proxy-k6zsl: failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:13:04 embed-certs-273180 kubelet[1319]: E1109 14:13:04.257463    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa0ed3ae-34a8-4368-8e1c-385033e46f0e-kube-api-access-hhqn7 podName:aa0ed3ae-34a8-4368-8e1c-385033e46f0e nodeName:}" failed. No retries permitted until 2025-11-09 14:13:04.757443019 +0000 UTC m=+6.743236838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hhqn7" (UniqueName: "kubernetes.io/projected/aa0ed3ae-34a8-4368-8e1c-385033e46f0e-kube-api-access-hhqn7") pod "kube-proxy-k6zsl" (UID: "aa0ed3ae-34a8-4368-8e1c-385033e46f0e") : failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:13:05 embed-certs-273180 kubelet[1319]: I1109 14:13:05.213166    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k6zsl" podStartSLOduration=2.213140783 podStartE2EDuration="2.213140783s" podCreationTimestamp="2025-11-09 14:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:05.20074156 +0000 UTC m=+7.186535379" watchObservedRunningTime="2025-11-09 14:13:05.213140783 +0000 UTC m=+7.198934605"
	Nov 09 14:13:05 embed-certs-273180 kubelet[1319]: I1109 14:13:05.213272    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-scgq8" podStartSLOduration=2.213267031 podStartE2EDuration="2.213267031s" podCreationTimestamp="2025-11-09 14:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:05.213080681 +0000 UTC m=+7.198874502" watchObservedRunningTime="2025-11-09 14:13:05.213267031 +0000 UTC m=+7.199060852"
	Nov 09 14:13:15 embed-certs-273180 kubelet[1319]: I1109 14:13:15.644942    1319 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 09 14:13:15 embed-certs-273180 kubelet[1319]: I1109 14:13:15.739590    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6f42679-62a3-4b25-9119-c08fe6b07c0c-config-volume\") pod \"coredns-66bc5c9577-bbnm4\" (UID: \"b6f42679-62a3-4b25-9119-c08fe6b07c0c\") " pod="kube-system/coredns-66bc5c9577-bbnm4"
	Nov 09 14:13:15 embed-certs-273180 kubelet[1319]: I1109 14:13:15.739635    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d9104f3d-417a-49dc-86ba-af31925458bb-tmp\") pod \"storage-provisioner\" (UID: \"d9104f3d-417a-49dc-86ba-af31925458bb\") " pod="kube-system/storage-provisioner"
	Nov 09 14:13:15 embed-certs-273180 kubelet[1319]: I1109 14:13:15.739732    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng69n\" (UniqueName: \"kubernetes.io/projected/b6f42679-62a3-4b25-9119-c08fe6b07c0c-kube-api-access-ng69n\") pod \"coredns-66bc5c9577-bbnm4\" (UID: \"b6f42679-62a3-4b25-9119-c08fe6b07c0c\") " pod="kube-system/coredns-66bc5c9577-bbnm4"
	Nov 09 14:13:15 embed-certs-273180 kubelet[1319]: I1109 14:13:15.739816    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjbz\" (UniqueName: \"kubernetes.io/projected/d9104f3d-417a-49dc-86ba-af31925458bb-kube-api-access-vxjbz\") pod \"storage-provisioner\" (UID: \"d9104f3d-417a-49dc-86ba-af31925458bb\") " pod="kube-system/storage-provisioner"
	Nov 09 14:13:16 embed-certs-273180 kubelet[1319]: I1109 14:13:16.224910    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bbnm4" podStartSLOduration=13.2248879 podStartE2EDuration="13.2248879s" podCreationTimestamp="2025-11-09 14:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:16.224858882 +0000 UTC m=+18.210652704" watchObservedRunningTime="2025-11-09 14:13:16.2248879 +0000 UTC m=+18.210681721"
	Nov 09 14:13:16 embed-certs-273180 kubelet[1319]: I1109 14:13:16.247185    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.247162896 podStartE2EDuration="13.247162896s" podCreationTimestamp="2025-11-09 14:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:16.247101894 +0000 UTC m=+18.232895716" watchObservedRunningTime="2025-11-09 14:13:16.247162896 +0000 UTC m=+18.232956720"
	Nov 09 14:13:18 embed-certs-273180 kubelet[1319]: I1109 14:13:18.255032    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvqvj\" (UniqueName: \"kubernetes.io/projected/e136284b-ac76-4b4f-ba01-633f83baa0e8-kube-api-access-nvqvj\") pod \"busybox\" (UID: \"e136284b-ac76-4b4f-ba01-633f83baa0e8\") " pod="default/busybox"
	
	
	==> storage-provisioner [89b1c2554e91760e1d5aba2c5730e782778b6ea65bf95cb49ffd6de01f600df2] <==
	I1109 14:13:16.041180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:13:16.049809       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:13:16.049858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:13:16.051762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:16.055990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:13:16.056127       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:13:16.056194       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee6b9ad1-7e0f-4b6d-8696-e4410f1b9328", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-273180_099eea67-41f5-4627-b235-e87f2e50b234 became leader
	I1109 14:13:16.056282       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-273180_099eea67-41f5-4627-b235-e87f2e50b234!
	W1109 14:13:16.060052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:16.064108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:13:16.157205       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-273180_099eea67-41f5-4627-b235-e87f2e50b234!
	W1109 14:13:18.067573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:18.073032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:20.076689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:20.168343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:22.171253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:22.254085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:24.257372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:24.261218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:26.264803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:26.268921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-273180 -n embed-certs-273180
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-273180 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.98s)
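Note on the storage-provisioner output above: the repeating "v1 Endpoints is deprecated in v1.33+" warnings come from the provisioner's leader election, which stores its lock in a v1 Endpoints object (the LeaderElection event above references Kind "Endpoints", Name "k8s.io-minikube-hostpath"). A quick, hedged way to confirm this against the running profile, assuming the embed-certs-273180 kubeconfig context from this run is still available:

    # Inspect the Endpoints object used as the leader-election lock (the source of the warnings)
    kubectl --context embed-certs-273180 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    # Lease objects (coordination.k8s.io/v1) are the non-deprecated lock type; listing them
    # shows which locks other components in the cluster already keep there
    kubectl --context embed-certs-273180 -n kube-system get leases

The warnings themselves are informational; nothing in this excerpt ties them to the EnableAddonWhileActive failure.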

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-169816 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-169816 --alsologtostderr -v=1: exit status 80 (2.186687734s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-169816 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:13:32.561507  260160 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:13:32.561790  260160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:32.561800  260160 out.go:374] Setting ErrFile to fd 2...
	I1109 14:13:32.561806  260160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:32.562008  260160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:13:32.562225  260160 out.go:368] Setting JSON to false
	I1109 14:13:32.562273  260160 mustload.go:66] Loading cluster: old-k8s-version-169816
	I1109 14:13:32.562592  260160 config.go:182] Loaded profile config "old-k8s-version-169816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:13:32.562988  260160 cli_runner.go:164] Run: docker container inspect old-k8s-version-169816 --format={{.State.Status}}
	I1109 14:13:32.580493  260160 host.go:66] Checking if "old-k8s-version-169816" exists ...
	I1109 14:13:32.580801  260160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:32.639955  260160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:false NGoroutines:94 SystemTime:2025-11-09 14:13:32.629326231 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:32.640532  260160 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-169816 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:13:32.642418  260160 out.go:179] * Pausing node old-k8s-version-169816 ... 
	I1109 14:13:32.643461  260160 host.go:66] Checking if "old-k8s-version-169816" exists ...
	I1109 14:13:32.643769  260160 ssh_runner.go:195] Run: systemctl --version
	I1109 14:13:32.643814  260160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169816
	I1109 14:13:32.660518  260160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/old-k8s-version-169816/id_rsa Username:docker}
	I1109 14:13:32.752896  260160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:32.776868  260160 pause.go:52] kubelet running: true
	I1109 14:13:32.776952  260160 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:13:32.933005  260160 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:13:32.933119  260160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:13:32.994978  260160 cri.go:89] found id: "e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa"
	I1109 14:13:32.995005  260160 cri.go:89] found id: "545a19c0aceb77a225bc0b4f41cc94737c4b393be192c5431942f1ca5716bb80"
	I1109 14:13:32.995011  260160 cri.go:89] found id: "bcf75d94a9dc6663fd1f0d1a24e10fdcd1c666fa884d21235773bb0d377856fc"
	I1109 14:13:32.995017  260160 cri.go:89] found id: "ebbd92bd47e4fe8bee5069fb179364ea166dbcf2c0168bd25cf3692734490d1b"
	I1109 14:13:32.995022  260160 cri.go:89] found id: "540482e832269305a074529fb0e1c0638067596f1030ebd9cff2130f4a71b8d0"
	I1109 14:13:32.995026  260160 cri.go:89] found id: "42a9a6c58384f12f9ac88b28ed9881f46da5d5ba7cacd3e83d0b643736dfe489"
	I1109 14:13:32.995031  260160 cri.go:89] found id: "2b36ea96b26225b43c2ec83d436d026e38a7613c24eadfbcb3d971fe39d0671b"
	I1109 14:13:32.995035  260160 cri.go:89] found id: "fe1074945f47108035d7de260124d948b1a6cc022b75173093c351eed9c62fe8"
	I1109 14:13:32.995039  260160 cri.go:89] found id: "d602ff875b92b7937f6fd0b9e58ec36e97373d0bb858bcc87ab19cd3955c7caa"
	I1109 14:13:32.995047  260160 cri.go:89] found id: "02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd"
	I1109 14:13:32.995056  260160 cri.go:89] found id: "9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58"
	I1109 14:13:32.995060  260160 cri.go:89] found id: ""
	I1109 14:13:32.995100  260160 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:13:33.006715  260160 retry.go:31] will retry after 360.580135ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:33Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:13:33.368348  260160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:33.380563  260160 pause.go:52] kubelet running: false
	I1109 14:13:33.380657  260160 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:13:33.524564  260160 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:13:33.524637  260160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:13:33.591024  260160 cri.go:89] found id: "e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa"
	I1109 14:13:33.591050  260160 cri.go:89] found id: "545a19c0aceb77a225bc0b4f41cc94737c4b393be192c5431942f1ca5716bb80"
	I1109 14:13:33.591056  260160 cri.go:89] found id: "bcf75d94a9dc6663fd1f0d1a24e10fdcd1c666fa884d21235773bb0d377856fc"
	I1109 14:13:33.591060  260160 cri.go:89] found id: "ebbd92bd47e4fe8bee5069fb179364ea166dbcf2c0168bd25cf3692734490d1b"
	I1109 14:13:33.591064  260160 cri.go:89] found id: "540482e832269305a074529fb0e1c0638067596f1030ebd9cff2130f4a71b8d0"
	I1109 14:13:33.591069  260160 cri.go:89] found id: "42a9a6c58384f12f9ac88b28ed9881f46da5d5ba7cacd3e83d0b643736dfe489"
	I1109 14:13:33.591073  260160 cri.go:89] found id: "2b36ea96b26225b43c2ec83d436d026e38a7613c24eadfbcb3d971fe39d0671b"
	I1109 14:13:33.591077  260160 cri.go:89] found id: "fe1074945f47108035d7de260124d948b1a6cc022b75173093c351eed9c62fe8"
	I1109 14:13:33.591081  260160 cri.go:89] found id: "d602ff875b92b7937f6fd0b9e58ec36e97373d0bb858bcc87ab19cd3955c7caa"
	I1109 14:13:33.591096  260160 cri.go:89] found id: "02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd"
	I1109 14:13:33.591104  260160 cri.go:89] found id: "9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58"
	I1109 14:13:33.591108  260160 cri.go:89] found id: ""
	I1109 14:13:33.591152  260160 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:13:33.602974  260160 retry.go:31] will retry after 226.21593ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:33Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:13:33.829336  260160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:33.843147  260160 pause.go:52] kubelet running: false
	I1109 14:13:33.843197  260160 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:13:34.003351  260160 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:13:34.003458  260160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:13:34.085927  260160 cri.go:89] found id: "e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa"
	I1109 14:13:34.085989  260160 cri.go:89] found id: "545a19c0aceb77a225bc0b4f41cc94737c4b393be192c5431942f1ca5716bb80"
	I1109 14:13:34.086075  260160 cri.go:89] found id: "bcf75d94a9dc6663fd1f0d1a24e10fdcd1c666fa884d21235773bb0d377856fc"
	I1109 14:13:34.086083  260160 cri.go:89] found id: "ebbd92bd47e4fe8bee5069fb179364ea166dbcf2c0168bd25cf3692734490d1b"
	I1109 14:13:34.086088  260160 cri.go:89] found id: "540482e832269305a074529fb0e1c0638067596f1030ebd9cff2130f4a71b8d0"
	I1109 14:13:34.086093  260160 cri.go:89] found id: "42a9a6c58384f12f9ac88b28ed9881f46da5d5ba7cacd3e83d0b643736dfe489"
	I1109 14:13:34.086101  260160 cri.go:89] found id: "2b36ea96b26225b43c2ec83d436d026e38a7613c24eadfbcb3d971fe39d0671b"
	I1109 14:13:34.086105  260160 cri.go:89] found id: "fe1074945f47108035d7de260124d948b1a6cc022b75173093c351eed9c62fe8"
	I1109 14:13:34.086110  260160 cri.go:89] found id: "d602ff875b92b7937f6fd0b9e58ec36e97373d0bb858bcc87ab19cd3955c7caa"
	I1109 14:13:34.086117  260160 cri.go:89] found id: "02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd"
	I1109 14:13:34.086121  260160 cri.go:89] found id: "9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58"
	I1109 14:13:34.086137  260160 cri.go:89] found id: ""
	I1109 14:13:34.086181  260160 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:13:34.100513  260160 retry.go:31] will retry after 349.321167ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:34Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:13:34.450854  260160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:34.463355  260160 pause.go:52] kubelet running: false
	I1109 14:13:34.463411  260160 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:13:34.599720  260160 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:13:34.599790  260160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:13:34.664891  260160 cri.go:89] found id: "e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa"
	I1109 14:13:34.664919  260160 cri.go:89] found id: "545a19c0aceb77a225bc0b4f41cc94737c4b393be192c5431942f1ca5716bb80"
	I1109 14:13:34.664925  260160 cri.go:89] found id: "bcf75d94a9dc6663fd1f0d1a24e10fdcd1c666fa884d21235773bb0d377856fc"
	I1109 14:13:34.664930  260160 cri.go:89] found id: "ebbd92bd47e4fe8bee5069fb179364ea166dbcf2c0168bd25cf3692734490d1b"
	I1109 14:13:34.664935  260160 cri.go:89] found id: "540482e832269305a074529fb0e1c0638067596f1030ebd9cff2130f4a71b8d0"
	I1109 14:13:34.664941  260160 cri.go:89] found id: "42a9a6c58384f12f9ac88b28ed9881f46da5d5ba7cacd3e83d0b643736dfe489"
	I1109 14:13:34.664945  260160 cri.go:89] found id: "2b36ea96b26225b43c2ec83d436d026e38a7613c24eadfbcb3d971fe39d0671b"
	I1109 14:13:34.664949  260160 cri.go:89] found id: "fe1074945f47108035d7de260124d948b1a6cc022b75173093c351eed9c62fe8"
	I1109 14:13:34.664952  260160 cri.go:89] found id: "d602ff875b92b7937f6fd0b9e58ec36e97373d0bb858bcc87ab19cd3955c7caa"
	I1109 14:13:34.664978  260160 cri.go:89] found id: "02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd"
	I1109 14:13:34.664982  260160 cri.go:89] found id: "9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58"
	I1109 14:13:34.664987  260160 cri.go:89] found id: ""
	I1109 14:13:34.665026  260160 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:13:34.678742  260160 out.go:203] 
	W1109 14:13:34.679866  260160 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:13:34.679885  260160 out.go:285] * 
	* 
	W1109 14:13:34.684261  260160 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:13:34.685391  260160 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-169816 --alsologtostderr -v=1 failed: exit status 80
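The pause exits with GUEST_PAUSE because every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory", even though the preceding crictl calls still enumerate the kube-system containers. A minimal diagnostic sketch against this profile (assumptions: crictl and runc are present in the node image, and /run/crio is only a guess at an alternative state directory for the CRI-O runtime):

    # CRI view still works, matching the "found id:" lines above
    out/minikube-linux-amd64 ssh -p old-k8s-version-169816 -- "sudo crictl ps --quiet"
    # Check which runtime state directories actually exist inside the node
    out/minikube-linux-amd64 ssh -p old-k8s-version-169816 -- "sudo ls -ld /run/runc /run/crio"
    # Reproduce the exact call that pause retries and finally gives up on
    out/minikube-linux-amd64 ssh -p old-k8s-version-169816 -- "sudo runc list -f json"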
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-169816
helpers_test.go:243: (dbg) docker inspect old-k8s-version-169816:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9",
	        "Created": "2025-11-09T14:11:19.114933288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244313,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:12:29.318972385Z",
	            "FinishedAt": "2025-11-09T14:12:28.373951801Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/hosts",
	        "LogPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9-json.log",
	        "Name": "/old-k8s-version-169816",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-169816:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-169816",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9",
	                "LowerDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-169816",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-169816/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-169816",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-169816",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-169816",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "441c3d481fe83d1019c0e25bc13ee7db4af53af94fa590af3a2e1e1a84db4724",
	            "SandboxKey": "/var/run/docker/netns/441c3d481fe8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-169816": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:96:0d:f5:e1:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0ef03f929b33a2352ddcf362b70e81410120fda868115e956b4bb456ca7cf63",
	                    "EndpointID": "35113df863827a82e0c3c7352c8d866c41e41f82ae88f0ace807f6896242b37c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-169816",
	                        "7b32476bd090"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
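The NetworkSettings block above shows 22/tcp published on 127.0.0.1:33065, which is the port the pause attempt's SSH client used (sshutil.go:53). The same Go-template inspect that cli_runner logged earlier can be run by hand to confirm the mapping; single quotes around the template keep the embedded "22/tcp" intact in the shell:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-169816
    # expected output for this run: 33065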
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 -n old-k8s-version-169816
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 -n old-k8s-version-169816: exit status 2 (322.518667ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-169816 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-169816 logs -n 25: (1.069000907s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ pause   │ -p pause-092489 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	│ delete  │ -p pause-092489                                                                                                                                                                                                                               │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ stop    │ -p old-k8s-version-169816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p cert-expiration-883873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-169816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ stop    │ -p no-preload-152932 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ delete  │ -p cert-expiration-883873                                                                                                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-152932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p kubernetes-upgrade-755159                                                                                                                                                                                                                  │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ stop    │ -p embed-certs-273180 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ image   │ old-k8s-version-169816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:13:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:13:17.185719  256773 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:13:17.185955  256773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:17.185962  256773 out.go:374] Setting ErrFile to fd 2...
	I1109 14:13:17.185966  256773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:17.186158  256773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:13:17.186568  256773 out.go:368] Setting JSON to false
	I1109 14:13:17.187686  256773 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3347,"bootTime":1762694250,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:13:17.187762  256773 start.go:143] virtualization: kvm guest
	I1109 14:13:17.189520  256773 out.go:179] * [default-k8s-diff-port-326524] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:13:17.191078  256773 notify.go:221] Checking for updates...
	I1109 14:13:17.191098  256773 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:13:17.192262  256773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:13:17.193437  256773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:13:17.194507  256773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:13:17.195596  256773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:13:17.196680  256773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:13:17.198200  256773 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:17.198297  256773 config.go:182] Loaded profile config "no-preload-152932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:17.198363  256773 config.go:182] Loaded profile config "old-k8s-version-169816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:13:17.198433  256773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:13:17.223091  256773 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:13:17.223203  256773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:17.287312  256773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:13:17.275776886 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:17.287415  256773 docker.go:319] overlay module found
	I1109 14:13:17.289116  256773 out.go:179] * Using the docker driver based on user configuration
	I1109 14:13:17.290229  256773 start.go:309] selected driver: docker
	I1109 14:13:17.290242  256773 start.go:930] validating driver "docker" against <nil>
	I1109 14:13:17.290253  256773 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:13:17.290851  256773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:17.343940  256773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:13:17.334239049 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:17.344115  256773 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:13:17.344374  256773 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:17.345927  256773 out.go:179] * Using Docker driver with root privileges
	I1109 14:13:17.347106  256773 cni.go:84] Creating CNI manager for ""
	I1109 14:13:17.347162  256773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:17.347171  256773 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:13:17.347232  256773 start.go:353] cluster config:
	{Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:17.348420  256773 out.go:179] * Starting "default-k8s-diff-port-326524" primary control-plane node in "default-k8s-diff-port-326524" cluster
	I1109 14:13:17.349353  256773 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:13:17.350328  256773 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:13:17.351300  256773 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:17.351333  256773 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:13:17.351341  256773 cache.go:65] Caching tarball of preloaded images
	I1109 14:13:17.351378  256773 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:13:17.351457  256773 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:13:17.351473  256773 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:13:17.351571  256773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/config.json ...
	I1109 14:13:17.351595  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/config.json: {Name:mk6fab699afd6d53f2fdcb141a735fa8da65c44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:17.370665  256773 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:13:17.370687  256773 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:13:17.370711  256773 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:13:17.370737  256773 start.go:360] acquireMachinesLock for default-k8s-diff-port-326524: {Name:mk380b0156a652cb7885053d4cba5ab348316819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:13:17.370865  256773 start.go:364] duration metric: took 106.738µs to acquireMachinesLock for "default-k8s-diff-port-326524"
	I1109 14:13:17.370892  256773 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:13:17.370981  256773 start.go:125] createHost starting for "" (driver="docker")
	W1109 14:13:15.652225  246717 node_ready.go:57] node "embed-certs-273180" has "Ready":"False" status (will retry)
	I1109 14:13:16.152405  246717 node_ready.go:49] node "embed-certs-273180" is "Ready"
	I1109 14:13:16.152430  246717 node_ready.go:38] duration metric: took 12.503394947s for node "embed-certs-273180" to be "Ready" ...
	I1109 14:13:16.152444  246717 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:13:16.152482  246717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:13:16.165139  246717 api_server.go:72] duration metric: took 12.905051503s to wait for apiserver process to appear ...
	I1109 14:13:16.165160  246717 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:13:16.165174  246717 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1109 14:13:16.169192  246717 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1109 14:13:16.170050  246717 api_server.go:141] control plane version: v1.34.1
	I1109 14:13:16.170071  246717 api_server.go:131] duration metric: took 4.906156ms to wait for apiserver health ...
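	(For reference: the healthz probe logged above amounts to an HTTPS GET against the apiserver's /healthz endpoint that tolerates the cluster's self-signed certificate and then checks that the body reads "ok". Below is a minimal Go sketch of such a probe; the helper name and error handling are illustrative only and not minikube's actual api_server.go code.)

// healthz_probe.go - illustrative sketch, not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs the apiserver /healthz endpoint and reports whether it
// answered 200 "ok". TLS verification is skipped because a freshly created
// cluster's CA is not in the host trust store.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// URL taken from the log line above; adjust for your own cluster.
	if err := checkHealthz("https://192.168.94.2:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	fmt.Println("apiserver healthz: ok")
}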
	I1109 14:13:16.170079  246717 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:13:16.174346  246717 system_pods.go:59] 8 kube-system pods found
	I1109 14:13:16.174385  246717 system_pods.go:61] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:13:16.174393  246717 system_pods.go:61] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.174402  246717 system_pods.go:61] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.174413  246717 system_pods.go:61] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.174419  246717 system_pods.go:61] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.174424  246717 system_pods.go:61] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.174429  246717 system_pods.go:61] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.174436  246717 system_pods.go:61] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:13:16.174447  246717 system_pods.go:74] duration metric: took 4.362755ms to wait for pod list to return data ...
	I1109 14:13:16.174463  246717 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:13:16.176894  246717 default_sa.go:45] found service account: "default"
	I1109 14:13:16.176922  246717 default_sa.go:55] duration metric: took 2.451484ms for default service account to be created ...
	I1109 14:13:16.176931  246717 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:13:16.181020  246717 system_pods.go:86] 8 kube-system pods found
	I1109 14:13:16.181045  246717 system_pods.go:89] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:13:16.181052  246717 system_pods.go:89] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.181060  246717 system_pods.go:89] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.181064  246717 system_pods.go:89] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.181071  246717 system_pods.go:89] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.181075  246717 system_pods.go:89] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.181081  246717 system_pods.go:89] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.181093  246717 system_pods.go:89] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:13:16.181114  246717 retry.go:31] will retry after 197.947699ms: missing components: kube-dns
	I1109 14:13:16.384079  246717 system_pods.go:86] 8 kube-system pods found
	I1109 14:13:16.384113  246717 system_pods.go:89] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Running
	I1109 14:13:16.384119  246717 system_pods.go:89] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.384123  246717 system_pods.go:89] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.384127  246717 system_pods.go:89] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.384134  246717 system_pods.go:89] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.384143  246717 system_pods.go:89] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.384148  246717 system_pods.go:89] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.384153  246717 system_pods.go:89] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Running
	I1109 14:13:16.384163  246717 system_pods.go:126] duration metric: took 207.224839ms to wait for k8s-apps to be running ...
	I1109 14:13:16.384180  246717 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:13:16.384240  246717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:16.399990  246717 system_svc.go:56] duration metric: took 15.781627ms WaitForService to wait for kubelet
	I1109 14:13:16.400025  246717 kubeadm.go:587] duration metric: took 13.139938623s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:16.400050  246717 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:13:16.403106  246717 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:13:16.403135  246717 node_conditions.go:123] node cpu capacity is 8
	I1109 14:13:16.403153  246717 node_conditions.go:105] duration metric: took 3.089934ms to run NodePressure ...
	I1109 14:13:16.403168  246717 start.go:242] waiting for startup goroutines ...
	I1109 14:13:16.403182  246717 start.go:247] waiting for cluster config update ...
	I1109 14:13:16.403195  246717 start.go:256] writing updated cluster config ...
	I1109 14:13:16.403401  246717 ssh_runner.go:195] Run: rm -f paused
	I1109 14:13:16.407239  246717 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:16.410710  246717 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.414784  246717 pod_ready.go:94] pod "coredns-66bc5c9577-bbnm4" is "Ready"
	I1109 14:13:16.414802  246717 pod_ready.go:86] duration metric: took 4.066203ms for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.416531  246717 pod_ready.go:83] waiting for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.420036  246717 pod_ready.go:94] pod "etcd-embed-certs-273180" is "Ready"
	I1109 14:13:16.420052  246717 pod_ready.go:86] duration metric: took 3.498205ms for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.422008  246717 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.425498  246717 pod_ready.go:94] pod "kube-apiserver-embed-certs-273180" is "Ready"
	I1109 14:13:16.425518  246717 pod_ready.go:86] duration metric: took 3.492681ms for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.427143  246717 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.811543  246717 pod_ready.go:94] pod "kube-controller-manager-embed-certs-273180" is "Ready"
	I1109 14:13:16.811564  246717 pod_ready.go:86] duration metric: took 384.404326ms for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.011653  246717 pod_ready.go:83] waiting for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.412029  246717 pod_ready.go:94] pod "kube-proxy-k6zsl" is "Ready"
	I1109 14:13:17.412057  246717 pod_ready.go:86] duration metric: took 400.379485ms for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.612189  246717 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:18.011980  246717 pod_ready.go:94] pod "kube-scheduler-embed-certs-273180" is "Ready"
	I1109 14:13:18.012004  246717 pod_ready.go:86] duration metric: took 399.78913ms for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:18.012015  246717 pod_ready.go:40] duration metric: took 1.604746997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:18.062408  246717 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:13:18.063827  246717 out.go:179] * Done! kubectl is now configured to use "embed-certs-273180" cluster and "default" namespace by default
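	(For reference: the pod_ready lines above are a poll-with-retry loop that waits, up to 4m0s, for each kube-system pod's Ready condition and records a duration metric per pod. A minimal sketch of that pattern from the CLI side follows; it assumes kubectl is on PATH, and the helper names are hypothetical rather than minikube's own.)

// wait_ready.go - illustrative retry loop, not minikube's pod_ready code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// podReady shells out to kubectl (assumed available) and reports whether the
// named pod's Ready condition is "True".
func podReady(namespace, pod string) bool {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
		"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
	return err == nil && string(out) == "True"
}

// waitFor polls cond every interval until it returns true or the timeout
// elapses, mirroring the "will retry after ..." messages in the log above.
func waitFor(timeout, interval time.Duration, cond func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	err := waitFor(4*time.Minute, 2*time.Second, func() bool {
		return podReady("kube-system", "coredns-66bc5c9577-bbnm4")
	})
	fmt.Println("pod ready:", err == nil)
}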
	W1109 14:13:14.734612  243958 pod_ready.go:104] pod "coredns-5dd5756b68-5bgfs" is not "Ready", error: <nil>
	W1109 14:13:16.735051  243958 pod_ready.go:104] pod "coredns-5dd5756b68-5bgfs" is not "Ready", error: <nil>
	W1109 14:13:15.269842  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:17.273879  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:19.234767  243958 pod_ready.go:94] pod "coredns-5dd5756b68-5bgfs" is "Ready"
	I1109 14:13:19.234799  243958 pod_ready.go:86] duration metric: took 38.505686458s for pod "coredns-5dd5756b68-5bgfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.237750  243958 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.242233  243958 pod_ready.go:94] pod "etcd-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.242258  243958 pod_ready.go:86] duration metric: took 4.482172ms for pod "etcd-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.245330  243958 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.249728  243958 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.249746  243958 pod_ready.go:86] duration metric: took 4.394681ms for pod "kube-apiserver-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.252151  243958 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.433101  243958 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.433127  243958 pod_ready.go:86] duration metric: took 180.958702ms for pod "kube-controller-manager-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.634032  243958 pod_ready.go:83] waiting for pod "kube-proxy-96xbm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.033559  243958 pod_ready.go:94] pod "kube-proxy-96xbm" is "Ready"
	I1109 14:13:20.033591  243958 pod_ready.go:86] duration metric: took 399.53199ms for pod "kube-proxy-96xbm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.233541  243958 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.632834  243958 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-169816" is "Ready"
	I1109 14:13:20.632865  243958 pod_ready.go:86] duration metric: took 399.296239ms for pod "kube-scheduler-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.632880  243958 pod_ready.go:40] duration metric: took 39.910042807s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:20.676081  243958 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1109 14:13:20.700243  243958 out.go:203] 
	W1109 14:13:20.702346  243958 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1109 14:13:20.704036  243958 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1109 14:13:20.709285  243958 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-169816" cluster and "default" namespace by default
	I1109 14:13:17.373015  256773 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:13:17.373227  256773 start.go:159] libmachine.API.Create for "default-k8s-diff-port-326524" (driver="docker")
	I1109 14:13:17.373253  256773 client.go:173] LocalClient.Create starting
	I1109 14:13:17.373339  256773 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:13:17.373379  256773 main.go:143] libmachine: Decoding PEM data...
	I1109 14:13:17.373402  256773 main.go:143] libmachine: Parsing certificate...
	I1109 14:13:17.373471  256773 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:13:17.373499  256773 main.go:143] libmachine: Decoding PEM data...
	I1109 14:13:17.373516  256773 main.go:143] libmachine: Parsing certificate...
	I1109 14:13:17.373944  256773 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:13:17.390599  256773 cli_runner.go:211] docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:13:17.390692  256773 network_create.go:284] running [docker network inspect default-k8s-diff-port-326524] to gather additional debugging logs...
	I1109 14:13:17.390717  256773 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524
	W1109 14:13:17.406825  256773 cli_runner.go:211] docker network inspect default-k8s-diff-port-326524 returned with exit code 1
	I1109 14:13:17.406849  256773 network_create.go:287] error running [docker network inspect default-k8s-diff-port-326524]: docker network inspect default-k8s-diff-port-326524: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-326524 not found
	I1109 14:13:17.406863  256773 network_create.go:289] output of [docker network inspect default-k8s-diff-port-326524]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-326524 not found
	
	** /stderr **
	I1109 14:13:17.406974  256773 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:13:17.424550  256773 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:13:17.425251  256773 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:13:17.425985  256773 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:13:17.426428  256773 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f0ef03f929b3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:cd:f4:b2:ad:24} reservation:<nil>}
	I1109 14:13:17.427179  256773 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f05c30}
	I1109 14:13:17.427204  256773 network_create.go:124] attempt to create docker network default-k8s-diff-port-326524 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1109 14:13:17.427254  256773 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 default-k8s-diff-port-326524
	I1109 14:13:17.482890  256773 network_create.go:108] docker network default-k8s-diff-port-326524 192.168.85.0/24 created
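	(For reference: the network_create lines above walk the already-used docker bridge subnets, 192.168.49.0/24 through 192.168.76.0/24, and settle on the first free candidate, 192.168.85.0/24, before running `docker network create`. The third octets in the log step by 9; the sketch below reproduces that kind of search with hypothetical names and a hard-coded "taken" set standing in for the `docker network inspect` results.)

// pick_subnet.go - hypothetical sketch of choosing a free 192.168.x.0/24 subnet.
package main

import "fmt"

// firstFreeSubnet walks candidate /24 subnets starting at 192.168.49.0/24,
// stepping the third octet by 9 (49, 58, 67, 76, 85, ... as in the log), and
// returns the first one not already used by an existing bridge network.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// Subnets reported as taken in the log; in practice these would come
	// from inspecting the existing docker networks.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	if subnet, ok := firstFreeSubnet(taken); ok {
		fmt.Println("would create network with subnet", subnet) // 192.168.85.0/24
	}
}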
	I1109 14:13:17.482919  256773 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-326524" container
	I1109 14:13:17.482985  256773 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:13:17.499994  256773 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-326524 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:13:17.516895  256773 oci.go:103] Successfully created a docker volume default-k8s-diff-port-326524
	I1109 14:13:17.516975  256773 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-326524-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --entrypoint /usr/bin/test -v default-k8s-diff-port-326524:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:13:17.902500  256773 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-326524
	I1109 14:13:17.902548  256773 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:17.902557  256773 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:13:17.902632  256773 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-326524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1109 14:13:19.770606  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:21.877241  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:24.270217  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:22.272330  256773 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-326524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.369635445s)
	I1109 14:13:22.272359  256773 kic.go:203] duration metric: took 4.369799264s to extract preloaded images to volume ...
	W1109 14:13:22.272424  256773 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 14:13:22.272451  256773 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 14:13:22.272482  256773 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:13:22.331245  256773 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-326524 --name default-k8s-diff-port-326524 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --network default-k8s-diff-port-326524 --ip 192.168.85.2 --volume default-k8s-diff-port-326524:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:13:22.644835  256773 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Running}}
	I1109 14:13:22.662519  256773 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:13:22.679685  256773 cli_runner.go:164] Run: docker exec default-k8s-diff-port-326524 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:13:22.729821  256773 oci.go:144] the created container "default-k8s-diff-port-326524" has a running status.
	I1109 14:13:22.729857  256773 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa...
	I1109 14:13:22.900781  256773 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:13:22.928554  256773 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:13:22.945568  256773 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:13:22.945590  256773 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-326524 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:13:22.991620  256773 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:13:23.008209  256773 machine.go:94] provisionDockerMachine start ...
	I1109 14:13:23.008279  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:23.024389  256773 main.go:143] libmachine: Using SSH client type: native
	I1109 14:13:23.024609  256773 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:13:23.024621  256773 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:13:23.025379  256773 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42224->127.0.0.1:33080: read: connection reset by peer
	I1109 14:13:26.154226  256773 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-326524
	
	I1109 14:13:26.154256  256773 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-326524"
	I1109 14:13:26.154315  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:26.172582  256773 main.go:143] libmachine: Using SSH client type: native
	I1109 14:13:26.172818  256773 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:13:26.172834  256773 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-326524 && echo "default-k8s-diff-port-326524" | sudo tee /etc/hostname
	I1109 14:13:26.323588  256773 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-326524
	
	I1109 14:13:26.323708  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:26.342328  256773 main.go:143] libmachine: Using SSH client type: native
	I1109 14:13:26.342547  256773 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:13:26.342576  256773 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-326524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-326524/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-326524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:13:26.473700  256773 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:13:26.473727  256773 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:13:26.473755  256773 ubuntu.go:190] setting up certificates
	I1109 14:13:26.473763  256773 provision.go:84] configureAuth start
	I1109 14:13:26.473804  256773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-326524
	I1109 14:13:26.492949  256773 provision.go:143] copyHostCerts
	I1109 14:13:26.493003  256773 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:13:26.493012  256773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:13:26.493072  256773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:13:26.493164  256773 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:13:26.493173  256773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:13:26.493202  256773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:13:26.493263  256773 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:13:26.493280  256773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:13:26.493313  256773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:13:26.493379  256773 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-326524 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-326524 localhost minikube]
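	(For reference: the configureAuth step above issues a server certificate whose SANs cover the container's loopback address, its bridge IP, and its hostnames. A self-contained stdlib sketch of producing a certificate with those SANs follows; it is self-signed for brevity, whereas minikube signs with its own CA, and the organization/expiry values are only echoes of what the log and config show.)

// san_cert.go - stdlib sketch of a certificate with SANs like the provision step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Errors elided for brevity in this sketch.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-326524"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the "san=[...]" log line above.
		DNSNames:    []string{"default-k8s-diff-port-326524", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}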
	I1109 14:13:26.896797  256773 provision.go:177] copyRemoteCerts
	I1109 14:13:26.896873  256773 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:13:26.896917  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:26.917511  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.011768  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:13:27.031249  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 14:13:27.048800  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:13:27.067605  256773 provision.go:87] duration metric: took 593.830868ms to configureAuth
	I1109 14:13:27.067633  256773 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:13:27.067852  256773 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:27.067992  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.092362  256773 main.go:143] libmachine: Using SSH client type: native
	I1109 14:13:27.092611  256773 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:13:27.092627  256773 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:13:27.334264  256773 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:13:27.334299  256773 machine.go:97] duration metric: took 4.3260586s to provisionDockerMachine
	I1109 14:13:27.334311  256773 client.go:176] duration metric: took 9.961052551s to LocalClient.Create
	I1109 14:13:27.334338  256773 start.go:167] duration metric: took 9.961109004s to libmachine.API.Create "default-k8s-diff-port-326524"
	I1109 14:13:27.334352  256773 start.go:293] postStartSetup for "default-k8s-diff-port-326524" (driver="docker")
	I1109 14:13:27.334371  256773 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:13:27.334447  256773 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:13:27.334495  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.353631  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.449797  256773 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:13:27.453262  256773 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:13:27.453288  256773 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:13:27.453298  256773 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:13:27.453347  256773 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:13:27.453438  256773 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:13:27.453546  256773 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:13:27.460689  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:13:27.480051  256773 start.go:296] duration metric: took 145.682575ms for postStartSetup
	I1109 14:13:27.480484  256773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-326524
	I1109 14:13:27.501511  256773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/config.json ...
	I1109 14:13:27.501851  256773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:13:27.501906  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.519345  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.611382  256773 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:13:27.616531  256773 start.go:128] duration metric: took 10.245537648s to createHost
	I1109 14:13:27.616556  256773 start.go:83] releasing machines lock for "default-k8s-diff-port-326524", held for 10.245678097s
	I1109 14:13:27.616620  256773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-326524
	I1109 14:13:27.634797  256773 ssh_runner.go:195] Run: cat /version.json
	I1109 14:13:27.634837  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.634876  256773 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:13:27.634952  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.654151  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.654731  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.745497  256773 ssh_runner.go:195] Run: systemctl --version
	I1109 14:13:27.822905  256773 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:13:27.858502  256773 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:13:27.862875  256773 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:13:27.862940  256773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:13:27.887767  256773 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:13:27.887786  256773 start.go:496] detecting cgroup driver to use...
	I1109 14:13:27.887819  256773 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:13:27.887869  256773 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:13:27.906100  256773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:13:27.923555  256773 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:13:27.923617  256773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:13:27.941996  256773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:13:27.960076  256773 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:13:28.048743  256773 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:13:28.131841  256773 docker.go:234] disabling docker service ...
	I1109 14:13:28.131916  256773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:13:28.150149  256773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:13:28.165388  256773 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:13:28.258358  256773 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:13:28.348537  256773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:13:28.361622  256773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:13:28.376079  256773 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:13:28.376146  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.387343  256773 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:13:28.387397  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.397407  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.407515  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.416541  256773 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:13:28.424557  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.433116  256773 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.445980  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.453875  256773 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:13:28.461480  256773 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:13:28.468492  256773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:13:28.546491  256773 ssh_runner.go:195] Run: sudo systemctl restart crio
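	(For reference: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed, setting the pause image, the systemd cgroup manager, conmon_cgroup, and the unprivileged-port sysctl, then reloads systemd and restarts CRI-O. The Go sketch below performs the same kind of anchored, line-oriented substitution as two of those `sed -i 's|^.*key = .*$|...|'` calls; the sample config contents are hypothetical.)

// crio_conf.go - illustrative regex rewrite equivalent to the sed calls above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting contents of 02-crio.conf.
	conf := `[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
pause_image = "registry.k8s.io/pause:3.9"
`
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	fmt.Print(conf)
}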
	I1109 14:13:28.658711  256773 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:13:28.658780  256773 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:13:28.662862  256773 start.go:564] Will wait 60s for crictl version
	I1109 14:13:28.662929  256773 ssh_runner.go:195] Run: which crictl
	I1109 14:13:28.666513  256773 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:13:28.690044  256773 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:13:28.690112  256773 ssh_runner.go:195] Run: crio --version
	I1109 14:13:28.716709  256773 ssh_runner.go:195] Run: crio --version
	I1109 14:13:28.743954  256773 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1109 14:13:26.271214  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:28.770277  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:28.744983  256773 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:13:28.762298  256773 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:13:28.766225  256773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:13:28.776502  256773 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:13:28.776591  256773 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:28.776632  256773 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:13:28.806130  256773 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:13:28.806147  256773 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:13:28.806184  256773 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:13:28.829283  256773 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:13:28.829302  256773 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:13:28.829309  256773 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:13:28.829391  256773 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-326524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:13:28.829460  256773 ssh_runner.go:195] Run: crio config
	I1109 14:13:28.872922  256773 cni.go:84] Creating CNI manager for ""
	I1109 14:13:28.872943  256773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:28.872959  256773 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:13:28.872977  256773 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-326524 NodeName:default-k8s-diff-port-326524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:13:28.873104  256773 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-326524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:13:28.873154  256773 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:13:28.880608  256773 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:13:28.880678  256773 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:13:28.888093  256773 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:13:28.900053  256773 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:13:28.914248  256773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
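
	[editor's note] The multi-document YAML just written to /var/tmp/minikube/kubeadm.yaml.new must agree with CRI-O on the cgroup driver and runtime socket (cgroupDriver: systemd, unix:///var/run/crio/crio.sock above). A minimal, illustrative Go sketch for checking those two fields in the generated file follows; it is not minikube code, and the file path and field names are taken from the dump above.

	// check_kubeadm_yaml.go - sketch only: load the generated kubeadm YAML and
	// print the kubelet settings that have to match the CRI-O runtime.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
		if err != nil {
			log.Fatal(err)
		}
		// The file contains several YAML documents separated by "---" lines.
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				log.Fatal(err)
			}
			if m["kind"] != "KubeletConfiguration" {
				continue
			}
			fmt.Println("cgroupDriver:", m["cgroupDriver"])                 // expect "systemd"
			fmt.Println("runtime endpoint:", m["containerRuntimeEndpoint"]) // expect unix:///var/run/crio/crio.sock
		}
	}
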
	I1109 14:13:28.925998  256773 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:13:28.929269  256773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
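
	[editor's note] The bash one-liner above updates /etc/hosts idempotently: it drops any existing control-plane.minikube.internal entry, appends the current one, and copies the result back. A rough Go equivalent of the same pattern (illustrative only, must run as root; name and IP taken from the log) is:

	// hosts_update.go - sketch of the idempotent /etc/hosts edit shown above.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const name = "control-plane.minikube.internal"
		const entry = "192.168.85.2\t" + name // IP from the log above

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t...$'
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}
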
	I1109 14:13:28.938258  256773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:13:29.015833  256773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:13:29.038481  256773 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524 for IP: 192.168.85.2
	I1109 14:13:29.038497  256773 certs.go:195] generating shared ca certs ...
	I1109 14:13:29.038515  256773 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.038714  256773 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:13:29.038786  256773 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:13:29.038803  256773 certs.go:257] generating profile certs ...
	I1109 14:13:29.038872  256773 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.key
	I1109 14:13:29.038905  256773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.crt with IP's: []
	I1109 14:13:29.295188  256773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.crt ...
	I1109 14:13:29.295214  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.crt: {Name:mkc65c63e5dfb9f6a1cb414fc8819b33b9769de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.295397  256773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.key ...
	I1109 14:13:29.295415  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.key: {Name:mk52e554adae895ad33151aafa7eddfb170ea52b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.295530  256773 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782
	I1109 14:13:29.295550  256773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt.cfdee782 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1109 14:13:29.438993  256773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt.cfdee782 ...
	I1109 14:13:29.439017  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt.cfdee782: {Name:mk0622eef1394efac7c41e0f0df9ef51ed04883f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.439161  256773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782 ...
	I1109 14:13:29.439176  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782: {Name:mke14e92c7835ad99d5db72cbf2707d98d6044c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.439271  256773 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt.cfdee782 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt
	I1109 14:13:29.439379  256773 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key
	I1109 14:13:29.439470  256773 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key
	I1109 14:13:29.439492  256773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt with IP's: []
	I1109 14:13:29.650804  256773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt ...
	I1109 14:13:29.650826  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt: {Name:mk38045e7500a345773acabac6a8a7407942a901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.650974  256773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key ...
	I1109 14:13:29.650992  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key: {Name:mk1b80d178593e643d8fba0be11b96c767a5965f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.651184  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:13:29.651220  256773 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:13:29.651228  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:13:29.651248  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:13:29.651269  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:13:29.651292  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:13:29.651330  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
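
	[editor's note] The crypto.go steps above generate an RSA key and a profile certificate signed by the minikube CA, with the IP SANs listed for the apiserver cert (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A condensed standard-library sketch of that pattern follows; it is illustrative only and generates a throwaway CA in-process, whereas the real run reuses the CA under .minikube/.

	// profile_cert.go - sketch of generating a CA-signed apiserver certificate.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key and self-signed CA certificate (already on disk in the real run).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf apiserver certificate signed by the CA, with the SANs from the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}
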
	I1109 14:13:29.651872  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:13:29.671783  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:13:29.689915  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:13:29.708380  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:13:29.727345  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:13:29.748028  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:13:29.766716  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:13:29.785630  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:13:29.803761  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:13:29.823198  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:13:29.841761  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:13:29.859990  256773 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:13:29.872969  256773 ssh_runner.go:195] Run: openssl version
	I1109 14:13:29.879976  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:13:29.888848  256773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:13:29.892308  256773 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:13:29.892349  256773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:13:29.934091  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:13:29.942269  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:13:29.950128  256773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:13:29.953488  256773 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:13:29.953543  256773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:13:29.987989  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:13:29.995929  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:13:30.003798  256773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:30.007456  256773 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:30.007501  256773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:30.041336  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
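
	[editor's note] The openssl x509 -hash calls above compute the subject-name hash that names the trust-store symlinks (e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem). A small sketch of that step, shelling out to openssl the same way and mirroring the "test -L ... || ln -fs ..." guard, could look like the following; it is illustrative only and needs root to write under /etc/ssl/certs.

	// ca_hash_link.go - sketch: get a CA cert's subject hash and create <hash>.0 symlink.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const certPath = "/usr/share/ca-certificates/minikubeCA.pem" // path from the log

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

		// Only create the link if it does not exist yet, as the shell guard does.
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(certPath, link); err != nil {
				log.Fatal(err)
			}
		}
		fmt.Println("linked", link, "->", certPath)
	}
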
	I1109 14:13:30.049027  256773 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:13:30.052388  256773 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:13:30.052432  256773 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:30.052504  256773 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:13:30.052539  256773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:13:30.077541  256773 cri.go:89] found id: ""
	I1109 14:13:30.077599  256773 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:13:30.084834  256773 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:13:30.092103  256773 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:13:30.092142  256773 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:13:30.099409  256773 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:13:30.099425  256773 kubeadm.go:158] found existing configuration files:
	
	I1109 14:13:30.099460  256773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1109 14:13:30.106576  256773 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:13:30.106623  256773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:13:30.113515  256773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1109 14:13:30.120502  256773 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:13:30.120551  256773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:13:30.127514  256773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1109 14:13:30.134534  256773 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:13:30.134569  256773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:13:30.141670  256773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1109 14:13:30.148660  256773 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:13:30.148705  256773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:13:30.155685  256773 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:13:30.215260  256773 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:13:30.272315  256773 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1109 14:13:31.272385  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:32.270386  250803 pod_ready.go:94] pod "coredns-66bc5c9577-6ssc5" is "Ready"
	I1109 14:13:32.270416  250803 pod_ready.go:86] duration metric: took 31.005317937s for pod "coredns-66bc5c9577-6ssc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.272632  250803 pod_ready.go:83] waiting for pod "etcd-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.276568  250803 pod_ready.go:94] pod "etcd-no-preload-152932" is "Ready"
	I1109 14:13:32.276589  250803 pod_ready.go:86] duration metric: took 3.926488ms for pod "etcd-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.278476  250803 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.282014  250803 pod_ready.go:94] pod "kube-apiserver-no-preload-152932" is "Ready"
	I1109 14:13:32.282034  250803 pod_ready.go:86] duration metric: took 3.536509ms for pod "kube-apiserver-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.283772  250803 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.468531  250803 pod_ready.go:94] pod "kube-controller-manager-no-preload-152932" is "Ready"
	I1109 14:13:32.468557  250803 pod_ready.go:86] duration metric: took 184.768044ms for pod "kube-controller-manager-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.668373  250803 pod_ready.go:83] waiting for pod "kube-proxy-f5tgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:33.068187  250803 pod_ready.go:94] pod "kube-proxy-f5tgg" is "Ready"
	I1109 14:13:33.068218  250803 pod_ready.go:86] duration metric: took 399.821537ms for pod "kube-proxy-f5tgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:33.267821  250803 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:33.668556  250803 pod_ready.go:94] pod "kube-scheduler-no-preload-152932" is "Ready"
	I1109 14:13:33.668585  250803 pod_ready.go:86] duration metric: took 400.741192ms for pod "kube-scheduler-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:33.668597  250803 pod_ready.go:40] duration metric: took 32.406224537s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:33.710914  250803 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:13:33.713326  250803 out.go:179] * Done! kubectl is now configured to use "no-preload-152932" cluster and "default" namespace by default
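
	[editor's note] The pod_ready.go lines above poll each kube-system pod until its Ready condition is True before declaring the cluster done. A minimal client-go sketch of the same check follows; it is not minikube's implementation, the kubeconfig path is assumed, and the pod name is taken from the log.

	// pod_ready_sketch.go - poll one pod until its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		const ns, name = "kube-system", "etcd-no-preload-152932" // pod name from the log
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println(name, "is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatalf("timed out waiting for %s/%s to be Ready", ns, name)
	}
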
	
	
	==> CRI-O <==
	Nov 09 14:12:58 old-k8s-version-169816 crio[557]: time="2025-11-09T14:12:58.27559591Z" level=info msg="Created container 9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t/kubernetes-dashboard" id=b7afe467-f3db-420d-a5e5-78e2fbd19fb9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:12:58 old-k8s-version-169816 crio[557]: time="2025-11-09T14:12:58.276143356Z" level=info msg="Starting container: 9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58" id=d728fed4-7bf0-40fd-a8a9-03a88e6b0895 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:12:58 old-k8s-version-169816 crio[557]: time="2025-11-09T14:12:58.278306746Z" level=info msg="Started container" PID=1715 containerID=9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t/kubernetes-dashboard id=d728fed4-7bf0-40fd-a8a9-03a88e6b0895 name=/runtime.v1.RuntimeService/StartContainer sandboxID=960e2360dc7c2cb6c6b31cc05d372ad271f1d47661ff91c557a871d7460e3ccd
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.853002745Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3bbe61e6-3341-4b24-80ce-cac86b167177 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.854884734Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9bd6ecd0-2e4e-4a76-a2d9-e69fbf385e6f name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.855858751Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=773f5dac-5da6-4986-8f1a-978c931fdecd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.856007944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.894822993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.895012439Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/12bc3f2302b6a87d78efe9c57b55ebbb99e5e213e4177e4420a30df60ed12bf9/merged/etc/passwd: no such file or directory"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.895048831Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/12bc3f2302b6a87d78efe9c57b55ebbb99e5e213e4177e4420a30df60ed12bf9/merged/etc/group: no such file or directory"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.89540615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.939228585Z" level=info msg="Created container e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa: kube-system/storage-provisioner/storage-provisioner" id=773f5dac-5da6-4986-8f1a-978c931fdecd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.939958351Z" level=info msg="Starting container: e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa" id=9411a07c-4e67-4a1e-90a7-8703bc353930 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.942529944Z" level=info msg="Started container" PID=1741 containerID=e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa description=kube-system/storage-provisioner/storage-provisioner id=9411a07c-4e67-4a1e-90a7-8703bc353930 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e05cb90da0cc71f2f72a5039ff731f9be1046b40952ee547ed985403d2317a72
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.715253214Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67487a26-0477-4e72-8c82-bc968737bd4b name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.716156444Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0b92d8bc-8621-427c-a8a8-365739936ca5 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.717212797Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5/dashboard-metrics-scraper" id=5792e33b-30bc-43d6-849d-68be1477bbcc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.717351318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.722753441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.723221725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.761273685Z" level=info msg="Created container 02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5/dashboard-metrics-scraper" id=5792e33b-30bc-43d6-849d-68be1477bbcc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.761847493Z" level=info msg="Starting container: 02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd" id=05fa6e22-79e4-4d3e-bbfa-934b805ea5b3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.763702929Z" level=info msg="Started container" PID=1757 containerID=02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5/dashboard-metrics-scraper id=05fa6e22-79e4-4d3e-bbfa-934b805ea5b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa307505115a44260998748bbcc026545787b7126bb2aaf1164616ee796ea1b2
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.86182002Z" level=info msg="Removing container: 6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c" id=d76666bd-45b7-4e53-b18d-607245c38d0e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.874561193Z" level=info msg="Removed container 6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5/dashboard-metrics-scraper" id=d76666bd-45b7-4e53-b18d-607245c38d0e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	02e49d47c1f9e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   aa307505115a4       dashboard-metrics-scraper-5f989dc9cf-cqjl5       kubernetes-dashboard
	e556b09a17663       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   e05cb90da0cc7       storage-provisioner                              kube-system
	9d2477a32ffe8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   960e2360dc7c2       kubernetes-dashboard-8694d4445c-v6s8t            kubernetes-dashboard
	545a19c0aceb7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   47823405d265a       coredns-5dd5756b68-5bgfs                         kube-system
	81342538d388d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   6cde96862115f       busybox                                          default
	bcf75d94a9dc6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   9a08d20d54b37       kindnet-mjzvm                                    kube-system
	ebbd92bd47e4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   e05cb90da0cc7       storage-provisioner                              kube-system
	540482e832269       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   2d8c7680218f1       kube-proxy-96xbm                                 kube-system
	42a9a6c58384f       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   cd3104eae1688       kube-controller-manager-old-k8s-version-169816   kube-system
	2b36ea96b2622       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   8e2816a4457dc       etcd-old-k8s-version-169816                      kube-system
	fe1074945f471       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   1e8adf2fa35fc       kube-scheduler-old-k8s-version-169816            kube-system
	d602ff875b92b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   0de44075c4fda       kube-apiserver-old-k8s-version-169816            kube-system
	
	
	==> coredns [545a19c0aceb77a225bc0b4f41cc94737c4b393be192c5431942f1ca5716bb80] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39718 - 61765 "HINFO IN 7045445207232453828.122638483308380106. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.470636794s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-169816
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-169816
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=old-k8s-version-169816
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_11_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:11:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-169816
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:13:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:13:09 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:13:09 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:13:09 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:13:09 +0000   Sun, 09 Nov 2025 14:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-169816
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                11632483-d582-4ced-bfcd-ac7706e38a54
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-5bgfs                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-169816                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-mjzvm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-169816             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-169816    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-96xbm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-169816             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-cqjl5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-v6s8t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x9 over 2m7s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                 kubelet          Node old-k8s-version-169816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                 kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                 kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-169816 event: Registered Node old-k8s-version-169816 in Controller
	  Normal  NodeReady                95s                  kubelet          Node old-k8s-version-169816 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node old-k8s-version-169816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                  node-controller  Node old-k8s-version-169816 event: Registered Node old-k8s-version-169816 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [2b36ea96b26225b43c2ec83d436d026e38a7613c24eadfbcb3d971fe39d0671b] <==
	{"level":"info","ts":"2025-11-09T14:12:39.234282Z","caller":"traceutil/trace.go:171","msg":"trace[124771433] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"108.262954ms","start":"2025-11-09T14:12:39.126004Z","end":"2025-11-09T14:12:39.234267Z","steps":["trace[124771433] 'process raft request'  (duration: 108.211313ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.661978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.207618ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356531688323987 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" mod_revision:467 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" value_size:654 lease:6414984494833548152 >> failure:<request_range:<key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:12:39.662166Z","caller":"traceutil/trace.go:171","msg":"trace[1172058608] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:493; }","duration":"272.069378ms","start":"2025-11-09T14:12:39.390085Z","end":"2025-11-09T14:12:39.662154Z","steps":["trace[1172058608] 'read index received'  (duration: 34.317457ms)","trace[1172058608] 'applied index is now lower than readState.Index'  (duration: 237.751114ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:12:39.662164Z","caller":"traceutil/trace.go:171","msg":"trace[1996140217] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"360.064314ms","start":"2025-11-09T14:12:39.302075Z","end":"2025-11-09T14:12:39.66214Z","steps":["trace[1996140217] 'process raft request'  (duration: 122.242942ms)","trace[1996140217] 'compare'  (duration: 237.105402ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:12:39.66225Z","caller":"traceutil/trace.go:171","msg":"trace[1439015845] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"358.774934ms","start":"2025-11-09T14:12:39.303459Z","end":"2025-11-09T14:12:39.662234Z","steps":["trace[1439015845] 'process raft request'  (duration: 358.620674ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.662285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-09T14:12:39.302059Z","time spent":"360.16749ms","remote":"127.0.0.1:34908","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":736,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" mod_revision:467 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" value_size:654 lease:6414984494833548152 >> failure:<request_range:<key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" > >"}
	{"level":"warn","ts":"2025-11-09T14:12:39.66234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.26687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"warn","ts":"2025-11-09T14:12:39.662346Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-09T14:12:39.303438Z","time spent":"358.853322ms","remote":"127.0.0.1:35032","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4299,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-169816\" mod_revision:320 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-169816\" value_size:4227 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-169816\" > >"}
	{"level":"info","ts":"2025-11-09T14:12:39.662367Z","caller":"traceutil/trace.go:171","msg":"trace[1058092481] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:474; }","duration":"272.301034ms","start":"2025-11-09T14:12:39.390058Z","end":"2025-11-09T14:12:39.662359Z","steps":["trace[1058092481] 'agreement among raft nodes before linearized reading'  (duration: 272.171009ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.662462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.673824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2025-11-09T14:12:39.662488Z","caller":"traceutil/trace.go:171","msg":"trace[440316120] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:474; }","duration":"121.698179ms","start":"2025-11-09T14:12:39.54078Z","end":"2025-11-09T14:12:39.662479Z","steps":["trace[440316120] 'agreement among raft nodes before linearized reading'  (duration: 121.64525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.662628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.869337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"range_response_count:65 size:58983"}
	{"level":"info","ts":"2025-11-09T14:12:39.662684Z","caller":"traceutil/trace.go:171","msg":"trace[1664488522] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:65; response_revision:474; }","duration":"121.92896ms","start":"2025-11-09T14:12:39.540746Z","end":"2025-11-09T14:12:39.662675Z","steps":["trace[1664488522] 'agreement among raft nodes before linearized reading'  (duration: 121.51431ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.975104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.370931ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356531688324006 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/namespaces/kubernetes-dashboard\" mod_revision:0 > success:<request_put:<key:\"/registry/namespaces/kubernetes-dashboard\" value_size:833 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:12:39.975257Z","caller":"traceutil/trace.go:171","msg":"trace[1746771980] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"276.001983ms","start":"2025-11-09T14:12:39.699247Z","end":"2025-11-09T14:12:39.975249Z","steps":["trace[1746771980] 'process raft request'  (duration: 275.936526ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:12:39.975264Z","caller":"traceutil/trace.go:171","msg":"trace[1291155925] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"282.066489ms","start":"2025-11-09T14:12:39.693172Z","end":"2025-11-09T14:12:39.975238Z","steps":["trace[1291155925] 'process raft request'  (duration: 130.512839ms)","trace[1291155925] 'compare'  (duration: 151.269685ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:12:39.975259Z","caller":"traceutil/trace.go:171","msg":"trace[986209878] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"280.989839ms","start":"2025-11-09T14:12:39.694251Z","end":"2025-11-09T14:12:39.975241Z","steps":["trace[986209878] 'read index received'  (duration: 129.44398ms)","trace[986209878] 'applied index is now lower than readState.Index'  (duration: 151.543555ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:12:39.975353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.140346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" ","response":"range_response_count:1 size:751"}
	{"level":"info","ts":"2025-11-09T14:12:39.975492Z","caller":"traceutil/trace.go:171","msg":"trace[1794406723] range","detail":"{range_begin:/registry/events/default/old-k8s-version-169816.18765c16769cae31; range_end:; response_count:1; response_revision:479; }","duration":"281.228776ms","start":"2025-11-09T14:12:39.694197Z","end":"2025-11-09T14:12:39.975425Z","steps":["trace[1794406723] 'agreement among raft nodes before linearized reading'  (duration: 281.081664ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.975535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.897299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:public-info-viewer\" ","response":"range_response_count:1 size:783"}
	{"level":"info","ts":"2025-11-09T14:12:39.975589Z","caller":"traceutil/trace.go:171","msg":"trace[598876810] range","detail":"{range_begin:/registry/clusterrolebindings/system:public-info-viewer; range_end:; response_count:1; response_revision:479; }","duration":"281.0014ms","start":"2025-11-09T14:12:39.694576Z","end":"2025-11-09T14:12:39.975578Z","steps":["trace[598876810] 'agreement among raft nodes before linearized reading'  (duration: 280.862541ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.97554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.985689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1122"}
	{"level":"info","ts":"2025-11-09T14:12:39.975797Z","caller":"traceutil/trace.go:171","msg":"trace[646892152] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:479; }","duration":"277.242713ms","start":"2025-11-09T14:12:39.698541Z","end":"2025-11-09T14:12:39.975784Z","steps":["trace[646892152] 'agreement among raft nodes before linearized reading'  (duration: 276.949729ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:21.79142Z","caller":"traceutil/trace.go:171","msg":"trace[2056334423] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"122.70015ms","start":"2025-11-09T14:13:21.668699Z","end":"2025-11-09T14:13:21.791399Z","steps":["trace[2056334423] 'process raft request'  (duration: 122.486354ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:21.873366Z","caller":"traceutil/trace.go:171","msg":"trace[986599777] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"201.695883ms","start":"2025-11-09T14:13:21.67165Z","end":"2025-11-09T14:13:21.873346Z","steps":["trace[986599777] 'process raft request'  (duration: 201.586469ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:13:35 up 56 min,  0 user,  load average: 2.85, 2.83, 1.85
	Linux old-k8s-version-169816 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bcf75d94a9dc6663fd1f0d1a24e10fdcd1c666fa884d21235773bb0d377856fc] <==
	I1109 14:12:40.263975       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:12:40.264257       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:12:40.264414       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:12:40.264436       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:12:40.264466       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:12:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:12:40.525995       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:12:40.526104       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:12:40.526124       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:12:40.526327       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:12:40.826757       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:12:40.826788       1 metrics.go:72] Registering metrics
	I1109 14:12:40.826856       1 controller.go:711] "Syncing nftables rules"
	I1109 14:12:50.526710       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:12:50.526789       1 main.go:301] handling current node
	I1109 14:13:00.526831       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:13:00.526885       1 main.go:301] handling current node
	I1109 14:13:10.526869       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:13:10.526902       1 main.go:301] handling current node
	I1109 14:13:20.527164       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:13:20.527197       1 main.go:301] handling current node
	I1109 14:13:30.532739       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:13:30.532785       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d602ff875b92b7937f6fd0b9e58ec36e97373d0bb858bcc87ab19cd3955c7caa] <==
	I1109 14:12:38.639941       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1109 14:12:38.639967       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:12:38.643559       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1109 14:12:38.643594       1 aggregator.go:166] initial CRD sync complete...
	I1109 14:12:38.643605       1 autoregister_controller.go:141] Starting autoregister controller
	I1109 14:12:38.643610       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:12:38.643615       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:12:38.680009       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:12:39.063627       1 trace.go:236] Trace[463023398]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b2dc087e-3739-423c-a867-49b94d07ddd2,client:192.168.76.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/amd64) kubernetes/855e7c4,verb:POST (09-Nov-2025 14:12:38.558) (total time: 504ms):
	Trace[463023398]: [504.827686ms] [504.827686ms] END
	E1109 14:12:39.107437       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:12:39.107596       1 trace.go:236] Trace[856774394]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:eee20d90-40b3-4444-bbb2-95f46f406f66,client:192.168.76.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/amd64) kubernetes/855e7c4,verb:POST (09-Nov-2025 14:12:38.560) (total time: 546ms):
	Trace[856774394]: ---"Write to database call failed" len:4049,err:nodes "old-k8s-version-169816" already exists 184ms (14:12:39.107)
	Trace[856774394]: [546.599597ms] [546.599597ms] END
	I1109 14:12:39.665708       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:12:39.690355       1 controller.go:624] quota admission added evaluator for: namespaces
	I1109 14:12:40.029141       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1109 14:12:40.064568       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:12:40.087225       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:12:40.101339       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1109 14:12:40.148476       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.98.248"}
	I1109 14:12:40.164024       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.223.89"}
	I1109 14:12:51.211879       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:12:51.222820       1 controller.go:624] quota admission added evaluator for: endpoints
	I1109 14:12:51.347762       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [42a9a6c58384f12f9ac88b28ed9881f46da5d5ba7cacd3e83d0b643736dfe489] <==
	I1109 14:12:51.350866       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1109 14:12:51.351396       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1109 14:12:51.353692       1 shared_informer.go:318] Caches are synced for disruption
	I1109 14:12:51.358800       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-cqjl5"
	I1109 14:12:51.359031       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-v6s8t"
	I1109 14:12:51.364585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.690806ms"
	I1109 14:12:51.365498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.01335ms"
	I1109 14:12:51.372289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.654722ms"
	I1109 14:12:51.372316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.770592ms"
	I1109 14:12:51.372371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.125µs"
	I1109 14:12:51.372392       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.943µs"
	I1109 14:12:51.374730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.185µs"
	I1109 14:12:51.382849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.465µs"
	I1109 14:12:51.724961       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:12:51.724987       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1109 14:12:51.736112       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:12:54.825405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.99µs"
	I1109 14:12:55.823863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="146.322µs"
	I1109 14:12:56.833690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="132.578µs"
	I1109 14:12:58.848695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.166292ms"
	I1109 14:12:58.849035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.192µs"
	I1109 14:13:12.871930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.666µs"
	I1109 14:13:18.816453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.821579ms"
	I1109 14:13:18.816581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.442µs"
	I1109 14:13:21.875354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.804µs"
	
	
	==> kube-proxy [540482e832269305a074529fb0e1c0638067596f1030ebd9cff2130f4a71b8d0] <==
	I1109 14:12:40.120137       1 server_others.go:69] "Using iptables proxy"
	I1109 14:12:40.133619       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1109 14:12:40.159711       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:12:40.162701       1 server_others.go:152] "Using iptables Proxier"
	I1109 14:12:40.162739       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 14:12:40.162750       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 14:12:40.162797       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 14:12:40.163465       1 server.go:846] "Version info" version="v1.28.0"
	I1109 14:12:40.163591       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:12:40.165430       1 config.go:188] "Starting service config controller"
	I1109 14:12:40.165450       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 14:12:40.165487       1 config.go:97] "Starting endpoint slice config controller"
	I1109 14:12:40.165492       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 14:12:40.164565       1 config.go:315] "Starting node config controller"
	I1109 14:12:40.165518       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 14:12:40.265786       1 shared_informer.go:318] Caches are synced for node config
	I1109 14:12:40.265788       1 shared_informer.go:318] Caches are synced for service config
	I1109 14:12:40.265806       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fe1074945f47108035d7de260124d948b1a6cc022b75173093c351eed9c62fe8] <==
	I1109 14:12:36.638344       1 serving.go:348] Generated self-signed cert in-memory
	W1109 14:12:38.588841       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:12:38.588874       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:12:38.588890       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:12:38.588900       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:12:38.615431       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1109 14:12:38.615452       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:12:38.616671       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:12:38.616702       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 14:12:38.617558       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1109 14:12:38.617582       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 14:12:38.717485       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.364367     715 topology_manager.go:215] "Topology Admit Handler" podUID="b40e7490-7646-4e1e-a89a-0936a3e8ca71" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-v6s8t"
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.446942     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt8m6\" (UniqueName: \"kubernetes.io/projected/aa8a7864-ba51-4e08-88fe-3f4eab718219-kube-api-access-pt8m6\") pod \"dashboard-metrics-scraper-5f989dc9cf-cqjl5\" (UID: \"aa8a7864-ba51-4e08-88fe-3f4eab718219\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5"
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.446998     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aa8a7864-ba51-4e08-88fe-3f4eab718219-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-cqjl5\" (UID: \"aa8a7864-ba51-4e08-88fe-3f4eab718219\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5"
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.547426     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b40e7490-7646-4e1e-a89a-0936a3e8ca71-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-v6s8t\" (UID: \"b40e7490-7646-4e1e-a89a-0936a3e8ca71\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t"
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.547483     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkdr\" (UniqueName: \"kubernetes.io/projected/b40e7490-7646-4e1e-a89a-0936a3e8ca71-kube-api-access-8tkdr\") pod \"kubernetes-dashboard-8694d4445c-v6s8t\" (UID: \"b40e7490-7646-4e1e-a89a-0936a3e8ca71\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t"
	Nov 09 14:12:54 old-k8s-version-169816 kubelet[715]: I1109 14:12:54.805326     715 scope.go:117] "RemoveContainer" containerID="7458fb4fcd4b802d45cf24db515248f34a335fa27699fa6f8578fdb8297b51d6"
	Nov 09 14:12:55 old-k8s-version-169816 kubelet[715]: I1109 14:12:55.809866     715 scope.go:117] "RemoveContainer" containerID="7458fb4fcd4b802d45cf24db515248f34a335fa27699fa6f8578fdb8297b51d6"
	Nov 09 14:12:55 old-k8s-version-169816 kubelet[715]: I1109 14:12:55.810069     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:12:55 old-k8s-version-169816 kubelet[715]: E1109 14:12:55.810482     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:12:56 old-k8s-version-169816 kubelet[715]: I1109 14:12:56.816522     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:12:56 old-k8s-version-169816 kubelet[715]: E1109 14:12:56.817298     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:12:58 old-k8s-version-169816 kubelet[715]: I1109 14:12:58.836347     715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t" podStartSLOduration=1.296745624 podCreationTimestamp="2025-11-09 14:12:51 +0000 UTC" firstStartedPulling="2025-11-09 14:12:51.687535591 +0000 UTC m=+16.072416886" lastFinishedPulling="2025-11-09 14:12:58.227076979 +0000 UTC m=+22.611958271" observedRunningTime="2025-11-09 14:12:58.835691575 +0000 UTC m=+23.220572878" watchObservedRunningTime="2025-11-09 14:12:58.836287009 +0000 UTC m=+23.221168313"
	Nov 09 14:13:01 old-k8s-version-169816 kubelet[715]: I1109 14:13:01.665121     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:13:01 old-k8s-version-169816 kubelet[715]: E1109 14:13:01.665467     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:13:10 old-k8s-version-169816 kubelet[715]: I1109 14:13:10.852521     715 scope.go:117] "RemoveContainer" containerID="ebbd92bd47e4fe8bee5069fb179364ea166dbcf2c0168bd25cf3692734490d1b"
	Nov 09 14:13:12 old-k8s-version-169816 kubelet[715]: I1109 14:13:12.714716     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:13:12 old-k8s-version-169816 kubelet[715]: I1109 14:13:12.860668     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:13:12 old-k8s-version-169816 kubelet[715]: I1109 14:13:12.860924     715 scope.go:117] "RemoveContainer" containerID="02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd"
	Nov 09 14:13:12 old-k8s-version-169816 kubelet[715]: E1109 14:13:12.861283     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:13:21 old-k8s-version-169816 kubelet[715]: I1109 14:13:21.665435     715 scope.go:117] "RemoveContainer" containerID="02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd"
	Nov 09 14:13:21 old-k8s-version-169816 kubelet[715]: E1109 14:13:21.665834     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:13:32 old-k8s-version-169816 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:13:32 old-k8s-version-169816 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:13:32 old-k8s-version-169816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:13:32 old-k8s-version-169816 systemd[1]: kubelet.service: Consumed 1.513s CPU time.
	
	
	==> kubernetes-dashboard [9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58] <==
	2025/11/09 14:12:58 Starting overwatch
	2025/11/09 14:12:58 Using namespace: kubernetes-dashboard
	2025/11/09 14:12:58 Using in-cluster config to connect to apiserver
	2025/11/09 14:12:58 Using secret token for csrf signing
	2025/11/09 14:12:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:12:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:12:58 Successful initial request to the apiserver, version: v1.28.0
	2025/11/09 14:12:58 Generating JWE encryption key
	2025/11/09 14:12:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:12:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:12:58 Initializing JWE encryption key from synchronized object
	2025/11/09 14:12:58 Creating in-cluster Sidecar client
	2025/11/09 14:12:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:12:58 Serving insecurely on HTTP port: 9090
	2025/11/09 14:13:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa] <==
	I1109 14:13:10.953956       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:13:10.962481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:13:10.962525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 14:13:28.385210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:13:28.385401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169816_41dc1996-a973-44fe-b7b1-06181a889cfb!
	I1109 14:13:28.385730       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c4c27452-9f34-4b03-8815-bd5ff2390444", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-169816_41dc1996-a973-44fe-b7b1-06181a889cfb became leader
	I1109 14:13:28.485951       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169816_41dc1996-a973-44fe-b7b1-06181a889cfb!
	
	
	==> storage-provisioner [ebbd92bd47e4fe8bee5069fb179364ea166dbcf2c0168bd25cf3692734490d1b] <==
	I1109 14:12:40.085226       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:13:10.087787       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169816 -n old-k8s-version-169816
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169816 -n old-k8s-version-169816: exit status 2 (326.446191ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-169816 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-169816
helpers_test.go:243: (dbg) docker inspect old-k8s-version-169816:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9",
	        "Created": "2025-11-09T14:11:19.114933288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244313,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:12:29.318972385Z",
	            "FinishedAt": "2025-11-09T14:12:28.373951801Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/hosts",
	        "LogPath": "/var/lib/docker/containers/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9/7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9-json.log",
	        "Name": "/old-k8s-version-169816",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-169816:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-169816",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b32476bd090034f796daee8335ca8fe70316129a4a3ddb33b11b7514f7880a9",
	                "LowerDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9f911d75fdbe90920747d9ae87079b6baa03d141bcceb4514f616f7af7bafda/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-169816",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-169816/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-169816",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-169816",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-169816",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "441c3d481fe83d1019c0e25bc13ee7db4af53af94fa590af3a2e1e1a84db4724",
	            "SandboxKey": "/var/run/docker/netns/441c3d481fe8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-169816": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:96:0d:f5:e1:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0ef03f929b33a2352ddcf362b70e81410120fda868115e956b4bb456ca7cf63",
	                    "EndpointID": "35113df863827a82e0c3c7352c8d866c41e41f82ae88f0ace807f6896242b37c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-169816",
	                        "7b32476bd090"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 -n old-k8s-version-169816
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 -n old-k8s-version-169816: exit status 2 (330.754446ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-169816 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-169816 logs -n 25: (1.196830387s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ pause   │ -p pause-092489 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │                     │
	│ delete  │ -p pause-092489                                                                                                                                                                                                                               │ pause-092489                 │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:11 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-169816 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ stop    │ -p old-k8s-version-169816 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p cert-expiration-883873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-169816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ stop    │ -p no-preload-152932 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ delete  │ -p cert-expiration-883873                                                                                                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-152932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p kubernetes-upgrade-755159                                                                                                                                                                                                                  │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ stop    │ -p embed-certs-273180 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ image   │ old-k8s-version-169816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:13:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:13:17.185719  256773 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:13:17.185955  256773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:17.185962  256773 out.go:374] Setting ErrFile to fd 2...
	I1109 14:13:17.185966  256773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:17.186158  256773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:13:17.186568  256773 out.go:368] Setting JSON to false
	I1109 14:13:17.187686  256773 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3347,"bootTime":1762694250,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:13:17.187762  256773 start.go:143] virtualization: kvm guest
	I1109 14:13:17.189520  256773 out.go:179] * [default-k8s-diff-port-326524] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:13:17.191078  256773 notify.go:221] Checking for updates...
	I1109 14:13:17.191098  256773 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:13:17.192262  256773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:13:17.193437  256773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:13:17.194507  256773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:13:17.195596  256773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:13:17.196680  256773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:13:17.198200  256773 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:17.198297  256773 config.go:182] Loaded profile config "no-preload-152932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:17.198363  256773 config.go:182] Loaded profile config "old-k8s-version-169816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:13:17.198433  256773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:13:17.223091  256773 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:13:17.223203  256773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:17.287312  256773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:13:17.275776886 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:17.287415  256773 docker.go:319] overlay module found
	I1109 14:13:17.289116  256773 out.go:179] * Using the docker driver based on user configuration
	I1109 14:13:17.290229  256773 start.go:309] selected driver: docker
	I1109 14:13:17.290242  256773 start.go:930] validating driver "docker" against <nil>
	I1109 14:13:17.290253  256773 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:13:17.290851  256773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:17.343940  256773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:13:17.334239049 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:17.344115  256773 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:13:17.344374  256773 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:17.345927  256773 out.go:179] * Using Docker driver with root privileges
	I1109 14:13:17.347106  256773 cni.go:84] Creating CNI manager for ""
	I1109 14:13:17.347162  256773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:17.347171  256773 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:13:17.347232  256773 start.go:353] cluster config:
	{Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:17.348420  256773 out.go:179] * Starting "default-k8s-diff-port-326524" primary control-plane node in "default-k8s-diff-port-326524" cluster
	I1109 14:13:17.349353  256773 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:13:17.350328  256773 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:13:17.351300  256773 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:17.351333  256773 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:13:17.351341  256773 cache.go:65] Caching tarball of preloaded images
	I1109 14:13:17.351378  256773 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:13:17.351457  256773 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:13:17.351473  256773 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:13:17.351571  256773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/config.json ...
	I1109 14:13:17.351595  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/config.json: {Name:mk6fab699afd6d53f2fdcb141a735fa8da65c44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:17.370665  256773 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:13:17.370687  256773 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:13:17.370711  256773 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:13:17.370737  256773 start.go:360] acquireMachinesLock for default-k8s-diff-port-326524: {Name:mk380b0156a652cb7885053d4cba5ab348316819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:13:17.370865  256773 start.go:364] duration metric: took 106.738µs to acquireMachinesLock for "default-k8s-diff-port-326524"
	I1109 14:13:17.370892  256773 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:13:17.370981  256773 start.go:125] createHost starting for "" (driver="docker")
	W1109 14:13:15.652225  246717 node_ready.go:57] node "embed-certs-273180" has "Ready":"False" status (will retry)
	I1109 14:13:16.152405  246717 node_ready.go:49] node "embed-certs-273180" is "Ready"
	I1109 14:13:16.152430  246717 node_ready.go:38] duration metric: took 12.503394947s for node "embed-certs-273180" to be "Ready" ...
	I1109 14:13:16.152444  246717 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:13:16.152482  246717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:13:16.165139  246717 api_server.go:72] duration metric: took 12.905051503s to wait for apiserver process to appear ...
	I1109 14:13:16.165160  246717 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:13:16.165174  246717 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1109 14:13:16.169192  246717 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1109 14:13:16.170050  246717 api_server.go:141] control plane version: v1.34.1
	I1109 14:13:16.170071  246717 api_server.go:131] duration metric: took 4.906156ms to wait for apiserver health ...
	I1109 14:13:16.170079  246717 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:13:16.174346  246717 system_pods.go:59] 8 kube-system pods found
	I1109 14:13:16.174385  246717 system_pods.go:61] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:13:16.174393  246717 system_pods.go:61] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.174402  246717 system_pods.go:61] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.174413  246717 system_pods.go:61] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.174419  246717 system_pods.go:61] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.174424  246717 system_pods.go:61] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.174429  246717 system_pods.go:61] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.174436  246717 system_pods.go:61] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:13:16.174447  246717 system_pods.go:74] duration metric: took 4.362755ms to wait for pod list to return data ...
	I1109 14:13:16.174463  246717 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:13:16.176894  246717 default_sa.go:45] found service account: "default"
	I1109 14:13:16.176922  246717 default_sa.go:55] duration metric: took 2.451484ms for default service account to be created ...
	I1109 14:13:16.176931  246717 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:13:16.181020  246717 system_pods.go:86] 8 kube-system pods found
	I1109 14:13:16.181045  246717 system_pods.go:89] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:13:16.181052  246717 system_pods.go:89] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.181060  246717 system_pods.go:89] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.181064  246717 system_pods.go:89] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.181071  246717 system_pods.go:89] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.181075  246717 system_pods.go:89] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.181081  246717 system_pods.go:89] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.181093  246717 system_pods.go:89] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:13:16.181114  246717 retry.go:31] will retry after 197.947699ms: missing components: kube-dns
	I1109 14:13:16.384079  246717 system_pods.go:86] 8 kube-system pods found
	I1109 14:13:16.384113  246717 system_pods.go:89] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Running
	I1109 14:13:16.384119  246717 system_pods.go:89] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running
	I1109 14:13:16.384123  246717 system_pods.go:89] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running
	I1109 14:13:16.384127  246717 system_pods.go:89] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running
	I1109 14:13:16.384134  246717 system_pods.go:89] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running
	I1109 14:13:16.384143  246717 system_pods.go:89] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:16.384148  246717 system_pods.go:89] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running
	I1109 14:13:16.384153  246717 system_pods.go:89] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Running
	I1109 14:13:16.384163  246717 system_pods.go:126] duration metric: took 207.224839ms to wait for k8s-apps to be running ...
	I1109 14:13:16.384180  246717 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:13:16.384240  246717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:16.399990  246717 system_svc.go:56] duration metric: took 15.781627ms WaitForService to wait for kubelet
	I1109 14:13:16.400025  246717 kubeadm.go:587] duration metric: took 13.139938623s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:16.400050  246717 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:13:16.403106  246717 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:13:16.403135  246717 node_conditions.go:123] node cpu capacity is 8
	I1109 14:13:16.403153  246717 node_conditions.go:105] duration metric: took 3.089934ms to run NodePressure ...
	I1109 14:13:16.403168  246717 start.go:242] waiting for startup goroutines ...
	I1109 14:13:16.403182  246717 start.go:247] waiting for cluster config update ...
	I1109 14:13:16.403195  246717 start.go:256] writing updated cluster config ...
	I1109 14:13:16.403401  246717 ssh_runner.go:195] Run: rm -f paused
	I1109 14:13:16.407239  246717 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:16.410710  246717 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.414784  246717 pod_ready.go:94] pod "coredns-66bc5c9577-bbnm4" is "Ready"
	I1109 14:13:16.414802  246717 pod_ready.go:86] duration metric: took 4.066203ms for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.416531  246717 pod_ready.go:83] waiting for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.420036  246717 pod_ready.go:94] pod "etcd-embed-certs-273180" is "Ready"
	I1109 14:13:16.420052  246717 pod_ready.go:86] duration metric: took 3.498205ms for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.422008  246717 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.425498  246717 pod_ready.go:94] pod "kube-apiserver-embed-certs-273180" is "Ready"
	I1109 14:13:16.425518  246717 pod_ready.go:86] duration metric: took 3.492681ms for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.427143  246717 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:16.811543  246717 pod_ready.go:94] pod "kube-controller-manager-embed-certs-273180" is "Ready"
	I1109 14:13:16.811564  246717 pod_ready.go:86] duration metric: took 384.404326ms for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.011653  246717 pod_ready.go:83] waiting for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.412029  246717 pod_ready.go:94] pod "kube-proxy-k6zsl" is "Ready"
	I1109 14:13:17.412057  246717 pod_ready.go:86] duration metric: took 400.379485ms for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:17.612189  246717 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:18.011980  246717 pod_ready.go:94] pod "kube-scheduler-embed-certs-273180" is "Ready"
	I1109 14:13:18.012004  246717 pod_ready.go:86] duration metric: took 399.78913ms for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:18.012015  246717 pod_ready.go:40] duration metric: took 1.604746997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:18.062408  246717 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:13:18.063827  246717 out.go:179] * Done! kubectl is now configured to use "embed-certs-273180" cluster and "default" namespace by default
	W1109 14:13:14.734612  243958 pod_ready.go:104] pod "coredns-5dd5756b68-5bgfs" is not "Ready", error: <nil>
	W1109 14:13:16.735051  243958 pod_ready.go:104] pod "coredns-5dd5756b68-5bgfs" is not "Ready", error: <nil>
	W1109 14:13:15.269842  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:17.273879  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:19.234767  243958 pod_ready.go:94] pod "coredns-5dd5756b68-5bgfs" is "Ready"
	I1109 14:13:19.234799  243958 pod_ready.go:86] duration metric: took 38.505686458s for pod "coredns-5dd5756b68-5bgfs" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.237750  243958 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.242233  243958 pod_ready.go:94] pod "etcd-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.242258  243958 pod_ready.go:86] duration metric: took 4.482172ms for pod "etcd-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.245330  243958 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.249728  243958 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.249746  243958 pod_ready.go:86] duration metric: took 4.394681ms for pod "kube-apiserver-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.252151  243958 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.433101  243958 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-169816" is "Ready"
	I1109 14:13:19.433127  243958 pod_ready.go:86] duration metric: took 180.958702ms for pod "kube-controller-manager-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:19.634032  243958 pod_ready.go:83] waiting for pod "kube-proxy-96xbm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.033559  243958 pod_ready.go:94] pod "kube-proxy-96xbm" is "Ready"
	I1109 14:13:20.033591  243958 pod_ready.go:86] duration metric: took 399.53199ms for pod "kube-proxy-96xbm" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.233541  243958 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.632834  243958 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-169816" is "Ready"
	I1109 14:13:20.632865  243958 pod_ready.go:86] duration metric: took 399.296239ms for pod "kube-scheduler-old-k8s-version-169816" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:20.632880  243958 pod_ready.go:40] duration metric: took 39.910042807s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:20.676081  243958 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1109 14:13:20.700243  243958 out.go:203] 
	W1109 14:13:20.702346  243958 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1109 14:13:20.704036  243958 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1109 14:13:20.709285  243958 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-169816" cluster and "default" namespace by default
	I1109 14:13:17.373015  256773 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:13:17.373227  256773 start.go:159] libmachine.API.Create for "default-k8s-diff-port-326524" (driver="docker")
	I1109 14:13:17.373253  256773 client.go:173] LocalClient.Create starting
	I1109 14:13:17.373339  256773 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:13:17.373379  256773 main.go:143] libmachine: Decoding PEM data...
	I1109 14:13:17.373402  256773 main.go:143] libmachine: Parsing certificate...
	I1109 14:13:17.373471  256773 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:13:17.373499  256773 main.go:143] libmachine: Decoding PEM data...
	I1109 14:13:17.373516  256773 main.go:143] libmachine: Parsing certificate...
	I1109 14:13:17.373944  256773 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:13:17.390599  256773 cli_runner.go:211] docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:13:17.390692  256773 network_create.go:284] running [docker network inspect default-k8s-diff-port-326524] to gather additional debugging logs...
	I1109 14:13:17.390717  256773 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524
	W1109 14:13:17.406825  256773 cli_runner.go:211] docker network inspect default-k8s-diff-port-326524 returned with exit code 1
	I1109 14:13:17.406849  256773 network_create.go:287] error running [docker network inspect default-k8s-diff-port-326524]: docker network inspect default-k8s-diff-port-326524: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-326524 not found
	I1109 14:13:17.406863  256773 network_create.go:289] output of [docker network inspect default-k8s-diff-port-326524]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-326524 not found
	
	** /stderr **
	I1109 14:13:17.406974  256773 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:13:17.424550  256773 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:13:17.425251  256773 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:13:17.425985  256773 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:13:17.426428  256773 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f0ef03f929b3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:cd:f4:b2:ad:24} reservation:<nil>}
	I1109 14:13:17.427179  256773 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f05c30}
	I1109 14:13:17.427204  256773 network_create.go:124] attempt to create docker network default-k8s-diff-port-326524 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1109 14:13:17.427254  256773 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 default-k8s-diff-port-326524
	I1109 14:13:17.482890  256773 network_create.go:108] docker network default-k8s-diff-port-326524 192.168.85.0/24 created
	I1109 14:13:17.482919  256773 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-326524" container
	I1109 14:13:17.482985  256773 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:13:17.499994  256773 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-326524 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:13:17.516895  256773 oci.go:103] Successfully created a docker volume default-k8s-diff-port-326524
	I1109 14:13:17.516975  256773 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-326524-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --entrypoint /usr/bin/test -v default-k8s-diff-port-326524:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:13:17.902500  256773 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-326524
	I1109 14:13:17.902548  256773 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:17.902557  256773 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:13:17.902632  256773 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-326524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1109 14:13:19.770606  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:21.877241  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:24.270217  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:22.272330  256773 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-326524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.369635445s)
	I1109 14:13:22.272359  256773 kic.go:203] duration metric: took 4.369799264s to extract preloaded images to volume ...
	W1109 14:13:22.272424  256773 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 14:13:22.272451  256773 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 14:13:22.272482  256773 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:13:22.331245  256773 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-326524 --name default-k8s-diff-port-326524 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-326524 --network default-k8s-diff-port-326524 --ip 192.168.85.2 --volume default-k8s-diff-port-326524:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:13:22.644835  256773 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Running}}
	I1109 14:13:22.662519  256773 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:13:22.679685  256773 cli_runner.go:164] Run: docker exec default-k8s-diff-port-326524 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:13:22.729821  256773 oci.go:144] the created container "default-k8s-diff-port-326524" has a running status.
	I1109 14:13:22.729857  256773 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa...
	I1109 14:13:22.900781  256773 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:13:22.928554  256773 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:13:22.945568  256773 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:13:22.945590  256773 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-326524 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:13:22.991620  256773 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:13:23.008209  256773 machine.go:94] provisionDockerMachine start ...
	I1109 14:13:23.008279  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:23.024389  256773 main.go:143] libmachine: Using SSH client type: native
	I1109 14:13:23.024609  256773 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:13:23.024621  256773 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:13:23.025379  256773 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42224->127.0.0.1:33080: read: connection reset by peer
	I1109 14:13:26.154226  256773 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-326524
	
	I1109 14:13:26.154256  256773 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-326524"
	I1109 14:13:26.154315  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:26.172582  256773 main.go:143] libmachine: Using SSH client type: native
	I1109 14:13:26.172818  256773 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:13:26.172834  256773 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-326524 && echo "default-k8s-diff-port-326524" | sudo tee /etc/hostname
	I1109 14:13:26.323588  256773 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-326524
	
	I1109 14:13:26.323708  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:26.342328  256773 main.go:143] libmachine: Using SSH client type: native
	I1109 14:13:26.342547  256773 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:13:26.342576  256773 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-326524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-326524/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-326524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:13:26.473700  256773 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:13:26.473727  256773 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:13:26.473755  256773 ubuntu.go:190] setting up certificates
	I1109 14:13:26.473763  256773 provision.go:84] configureAuth start
	I1109 14:13:26.473804  256773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-326524
	I1109 14:13:26.492949  256773 provision.go:143] copyHostCerts
	I1109 14:13:26.493003  256773 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:13:26.493012  256773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:13:26.493072  256773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:13:26.493164  256773 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:13:26.493173  256773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:13:26.493202  256773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:13:26.493263  256773 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:13:26.493280  256773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:13:26.493313  256773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:13:26.493379  256773 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-326524 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-326524 localhost minikube]
	I1109 14:13:26.896797  256773 provision.go:177] copyRemoteCerts
	I1109 14:13:26.896873  256773 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:13:26.896917  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:26.917511  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.011768  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:13:27.031249  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 14:13:27.048800  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:13:27.067605  256773 provision.go:87] duration metric: took 593.830868ms to configureAuth
	I1109 14:13:27.067633  256773 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:13:27.067852  256773 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:27.067992  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.092362  256773 main.go:143] libmachine: Using SSH client type: native
	I1109 14:13:27.092611  256773 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:13:27.092627  256773 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:13:27.334264  256773 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:13:27.334299  256773 machine.go:97] duration metric: took 4.3260586s to provisionDockerMachine
	I1109 14:13:27.334311  256773 client.go:176] duration metric: took 9.961052551s to LocalClient.Create
	I1109 14:13:27.334338  256773 start.go:167] duration metric: took 9.961109004s to libmachine.API.Create "default-k8s-diff-port-326524"
	I1109 14:13:27.334352  256773 start.go:293] postStartSetup for "default-k8s-diff-port-326524" (driver="docker")
	I1109 14:13:27.334371  256773 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:13:27.334447  256773 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:13:27.334495  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.353631  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.449797  256773 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:13:27.453262  256773 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:13:27.453288  256773 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:13:27.453298  256773 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:13:27.453347  256773 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:13:27.453438  256773 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:13:27.453546  256773 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:13:27.460689  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:13:27.480051  256773 start.go:296] duration metric: took 145.682575ms for postStartSetup
	I1109 14:13:27.480484  256773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-326524
	I1109 14:13:27.501511  256773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/config.json ...
	I1109 14:13:27.501851  256773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:13:27.501906  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.519345  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.611382  256773 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:13:27.616531  256773 start.go:128] duration metric: took 10.245537648s to createHost
	I1109 14:13:27.616556  256773 start.go:83] releasing machines lock for "default-k8s-diff-port-326524", held for 10.245678097s
	I1109 14:13:27.616620  256773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-326524
	I1109 14:13:27.634797  256773 ssh_runner.go:195] Run: cat /version.json
	I1109 14:13:27.634837  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.634876  256773 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:13:27.634952  256773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:13:27.654151  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.654731  256773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:13:27.745497  256773 ssh_runner.go:195] Run: systemctl --version
	I1109 14:13:27.822905  256773 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:13:27.858502  256773 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:13:27.862875  256773 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:13:27.862940  256773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:13:27.887767  256773 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:13:27.887786  256773 start.go:496] detecting cgroup driver to use...
	I1109 14:13:27.887819  256773 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:13:27.887869  256773 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:13:27.906100  256773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:13:27.923555  256773 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:13:27.923617  256773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:13:27.941996  256773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:13:27.960076  256773 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:13:28.048743  256773 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:13:28.131841  256773 docker.go:234] disabling docker service ...
	I1109 14:13:28.131916  256773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:13:28.150149  256773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:13:28.165388  256773 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:13:28.258358  256773 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:13:28.348537  256773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:13:28.361622  256773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:13:28.376079  256773 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:13:28.376146  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.387343  256773 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:13:28.387397  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.397407  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.407515  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.416541  256773 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:13:28.424557  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.433116  256773 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.445980  256773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:13:28.453875  256773 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:13:28.461480  256773 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:13:28.468492  256773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:13:28.546491  256773 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:13:28.658711  256773 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:13:28.658780  256773 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:13:28.662862  256773 start.go:564] Will wait 60s for crictl version
	I1109 14:13:28.662929  256773 ssh_runner.go:195] Run: which crictl
	I1109 14:13:28.666513  256773 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:13:28.690044  256773 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:13:28.690112  256773 ssh_runner.go:195] Run: crio --version
	I1109 14:13:28.716709  256773 ssh_runner.go:195] Run: crio --version
	I1109 14:13:28.743954  256773 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1109 14:13:26.271214  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	W1109 14:13:28.770277  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:28.744983  256773 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:13:28.762298  256773 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:13:28.766225  256773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:13:28.776502  256773 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:13:28.776591  256773 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:28.776632  256773 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:13:28.806130  256773 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:13:28.806147  256773 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:13:28.806184  256773 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:13:28.829283  256773 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:13:28.829302  256773 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:13:28.829309  256773 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:13:28.829391  256773 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-326524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:13:28.829460  256773 ssh_runner.go:195] Run: crio config
	I1109 14:13:28.872922  256773 cni.go:84] Creating CNI manager for ""
	I1109 14:13:28.872943  256773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:28.872959  256773 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:13:28.872977  256773 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-326524 NodeName:default-k8s-diff-port-326524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:13:28.873104  256773 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-326524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
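For reference, the block above is the combined InitConfiguration / ClusterConfiguration / KubeletConfiguration / KubeProxyConfiguration that is written later in this log to /var/tmp/minikube/kubeadm.yaml. A hedged sketch of validating such a file on the node without changing cluster state (standard kubeadm flags; the path is taken from the scp/cp steps below):

    # dry-run: parse the config and print what kubeadm would do, applying nothing
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run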
	
	I1109 14:13:28.873154  256773 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:13:28.880608  256773 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:13:28.880678  256773 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:13:28.888093  256773 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:13:28.900053  256773 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:13:28.914248  256773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1109 14:13:28.925998  256773 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:13:28.929269  256773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:13:28.938258  256773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:13:29.015833  256773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:13:29.038481  256773 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524 for IP: 192.168.85.2
	I1109 14:13:29.038497  256773 certs.go:195] generating shared ca certs ...
	I1109 14:13:29.038515  256773 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.038714  256773 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:13:29.038786  256773 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:13:29.038803  256773 certs.go:257] generating profile certs ...
	I1109 14:13:29.038872  256773 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.key
	I1109 14:13:29.038905  256773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.crt with IP's: []
	I1109 14:13:29.295188  256773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.crt ...
	I1109 14:13:29.295214  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.crt: {Name:mkc65c63e5dfb9f6a1cb414fc8819b33b9769de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.295397  256773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.key ...
	I1109 14:13:29.295415  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.key: {Name:mk52e554adae895ad33151aafa7eddfb170ea52b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.295530  256773 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782
	I1109 14:13:29.295550  256773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt.cfdee782 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1109 14:13:29.438993  256773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt.cfdee782 ...
	I1109 14:13:29.439017  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt.cfdee782: {Name:mk0622eef1394efac7c41e0f0df9ef51ed04883f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.439161  256773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782 ...
	I1109 14:13:29.439176  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782: {Name:mke14e92c7835ad99d5db72cbf2707d98d6044c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.439271  256773 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt.cfdee782 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt
	I1109 14:13:29.439379  256773 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key
	I1109 14:13:29.439470  256773 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key
	I1109 14:13:29.439492  256773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt with IP's: []
	I1109 14:13:29.650804  256773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt ...
	I1109 14:13:29.650826  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt: {Name:mk38045e7500a345773acabac6a8a7407942a901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.650974  256773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key ...
	I1109 14:13:29.650992  256773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key: {Name:mk1b80d178593e643d8fba0be11b96c767a5965f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:29.651184  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:13:29.651220  256773 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:13:29.651228  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:13:29.651248  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:13:29.651269  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:13:29.651292  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:13:29.651330  256773 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:13:29.651872  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:13:29.671783  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:13:29.689915  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:13:29.708380  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:13:29.727345  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:13:29.748028  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:13:29.766716  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:13:29.785630  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:13:29.803761  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:13:29.823198  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:13:29.841761  256773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:13:29.859990  256773 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
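The apiserver certificate copied above was generated with the service IP, loopback, cluster IP and node IP as subject alternative names (see the 'generating signed profile cert for "minikube"' step). A hedged way to confirm the SANs on the node, using standard openssl against the path from the scp line:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'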
	I1109 14:13:29.872969  256773 ssh_runner.go:195] Run: openssl version
	I1109 14:13:29.879976  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:13:29.888848  256773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:13:29.892308  256773 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:13:29.892349  256773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:13:29.934091  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:13:29.942269  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:13:29.950128  256773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:13:29.953488  256773 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:13:29.953543  256773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:13:29.987989  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:13:29.995929  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:13:30.003798  256773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:30.007456  256773 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:30.007501  256773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:30.041336  256773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
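The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names, which is how the trust store in /etc/ssl/certs is indexed; the hashes come from the openssl x509 -hash calls in the preceding lines. Reproducing the last one by hand inside the node:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 per the log
    ls -l /etc/ssl/certs/b5213941.0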
	I1109 14:13:30.049027  256773 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:13:30.052388  256773 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:13:30.052432  256773 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:30.052504  256773 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:13:30.052539  256773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:13:30.077541  256773 cri.go:89] found id: ""
	I1109 14:13:30.077599  256773 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:13:30.084834  256773 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:13:30.092103  256773 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:13:30.092142  256773 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:13:30.099409  256773 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:13:30.099425  256773 kubeadm.go:158] found existing configuration files:
	
	I1109 14:13:30.099460  256773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1109 14:13:30.106576  256773 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:13:30.106623  256773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:13:30.113515  256773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1109 14:13:30.120502  256773 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:13:30.120551  256773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:13:30.127514  256773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1109 14:13:30.134534  256773 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:13:30.134569  256773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:13:30.141670  256773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1109 14:13:30.148660  256773 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:13:30.148705  256773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:13:30.155685  256773 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:13:30.215260  256773 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:13:30.272315  256773 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
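Both preflight warnings above are expected in this environment: SystemVerification is already in the --ignore-preflight-errors list passed to kubeadm init a few lines earlier, and the kubelet service is started by minikube itself rather than enabled at boot. If the second warning needed silencing on a long-lived node, the command the warning itself suggests would do it:

    sudo systemctl enable kubelet.service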
	W1109 14:13:31.272385  250803 pod_ready.go:104] pod "coredns-66bc5c9577-6ssc5" is not "Ready", error: <nil>
	I1109 14:13:32.270386  250803 pod_ready.go:94] pod "coredns-66bc5c9577-6ssc5" is "Ready"
	I1109 14:13:32.270416  250803 pod_ready.go:86] duration metric: took 31.005317937s for pod "coredns-66bc5c9577-6ssc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.272632  250803 pod_ready.go:83] waiting for pod "etcd-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.276568  250803 pod_ready.go:94] pod "etcd-no-preload-152932" is "Ready"
	I1109 14:13:32.276589  250803 pod_ready.go:86] duration metric: took 3.926488ms for pod "etcd-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.278476  250803 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.282014  250803 pod_ready.go:94] pod "kube-apiserver-no-preload-152932" is "Ready"
	I1109 14:13:32.282034  250803 pod_ready.go:86] duration metric: took 3.536509ms for pod "kube-apiserver-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.283772  250803 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.468531  250803 pod_ready.go:94] pod "kube-controller-manager-no-preload-152932" is "Ready"
	I1109 14:13:32.468557  250803 pod_ready.go:86] duration metric: took 184.768044ms for pod "kube-controller-manager-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:32.668373  250803 pod_ready.go:83] waiting for pod "kube-proxy-f5tgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:33.068187  250803 pod_ready.go:94] pod "kube-proxy-f5tgg" is "Ready"
	I1109 14:13:33.068218  250803 pod_ready.go:86] duration metric: took 399.821537ms for pod "kube-proxy-f5tgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:33.267821  250803 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:33.668556  250803 pod_ready.go:94] pod "kube-scheduler-no-preload-152932" is "Ready"
	I1109 14:13:33.668585  250803 pod_ready.go:86] duration metric: took 400.741192ms for pod "kube-scheduler-no-preload-152932" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:13:33.668597  250803 pod_ready.go:40] duration metric: took 32.406224537s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:33.710914  250803 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:13:33.713326  250803 out.go:179] * Done! kubectl is now configured to use "no-preload-152932" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:12:58 old-k8s-version-169816 crio[557]: time="2025-11-09T14:12:58.27559591Z" level=info msg="Created container 9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t/kubernetes-dashboard" id=b7afe467-f3db-420d-a5e5-78e2fbd19fb9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:12:58 old-k8s-version-169816 crio[557]: time="2025-11-09T14:12:58.276143356Z" level=info msg="Starting container: 9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58" id=d728fed4-7bf0-40fd-a8a9-03a88e6b0895 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:12:58 old-k8s-version-169816 crio[557]: time="2025-11-09T14:12:58.278306746Z" level=info msg="Started container" PID=1715 containerID=9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t/kubernetes-dashboard id=d728fed4-7bf0-40fd-a8a9-03a88e6b0895 name=/runtime.v1.RuntimeService/StartContainer sandboxID=960e2360dc7c2cb6c6b31cc05d372ad271f1d47661ff91c557a871d7460e3ccd
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.853002745Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3bbe61e6-3341-4b24-80ce-cac86b167177 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.854884734Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9bd6ecd0-2e4e-4a76-a2d9-e69fbf385e6f name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.855858751Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=773f5dac-5da6-4986-8f1a-978c931fdecd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.856007944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.894822993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.895012439Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/12bc3f2302b6a87d78efe9c57b55ebbb99e5e213e4177e4420a30df60ed12bf9/merged/etc/passwd: no such file or directory"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.895048831Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/12bc3f2302b6a87d78efe9c57b55ebbb99e5e213e4177e4420a30df60ed12bf9/merged/etc/group: no such file or directory"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.89540615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.939228585Z" level=info msg="Created container e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa: kube-system/storage-provisioner/storage-provisioner" id=773f5dac-5da6-4986-8f1a-978c931fdecd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.939958351Z" level=info msg="Starting container: e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa" id=9411a07c-4e67-4a1e-90a7-8703bc353930 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:10 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:10.942529944Z" level=info msg="Started container" PID=1741 containerID=e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa description=kube-system/storage-provisioner/storage-provisioner id=9411a07c-4e67-4a1e-90a7-8703bc353930 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e05cb90da0cc71f2f72a5039ff731f9be1046b40952ee547ed985403d2317a72
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.715253214Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67487a26-0477-4e72-8c82-bc968737bd4b name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.716156444Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0b92d8bc-8621-427c-a8a8-365739936ca5 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.717212797Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5/dashboard-metrics-scraper" id=5792e33b-30bc-43d6-849d-68be1477bbcc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.717351318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.722753441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.723221725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.761273685Z" level=info msg="Created container 02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5/dashboard-metrics-scraper" id=5792e33b-30bc-43d6-849d-68be1477bbcc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.761847493Z" level=info msg="Starting container: 02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd" id=05fa6e22-79e4-4d3e-bbfa-934b805ea5b3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.763702929Z" level=info msg="Started container" PID=1757 containerID=02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5/dashboard-metrics-scraper id=05fa6e22-79e4-4d3e-bbfa-934b805ea5b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa307505115a44260998748bbcc026545787b7126bb2aaf1164616ee796ea1b2
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.86182002Z" level=info msg="Removing container: 6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c" id=d76666bd-45b7-4e53-b18d-607245c38d0e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:13:12 old-k8s-version-169816 crio[557]: time="2025-11-09T14:13:12.874561193Z" level=info msg="Removed container 6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5/dashboard-metrics-scraper" id=d76666bd-45b7-4e53-b18d-607245c38d0e name=/runtime.v1.RuntimeService/RemoveContainer
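The CRI-O entries above are read from the runtime's journal inside the node. A hedged sketch of pulling the same stream directly, assuming the profile name shown in this section of the log:

    minikube -p old-k8s-version-169816 ssh -- sudo journalctl -u crio --no-pager -n 50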
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	02e49d47c1f9e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   aa307505115a4       dashboard-metrics-scraper-5f989dc9cf-cqjl5       kubernetes-dashboard
	e556b09a17663       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   e05cb90da0cc7       storage-provisioner                              kube-system
	9d2477a32ffe8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago       Running             kubernetes-dashboard        0                   960e2360dc7c2       kubernetes-dashboard-8694d4445c-v6s8t            kubernetes-dashboard
	545a19c0aceb7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   47823405d265a       coredns-5dd5756b68-5bgfs                         kube-system
	81342538d388d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   6cde96862115f       busybox                                          default
	bcf75d94a9dc6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   9a08d20d54b37       kindnet-mjzvm                                    kube-system
	ebbd92bd47e4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   e05cb90da0cc7       storage-provisioner                              kube-system
	540482e832269       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   2d8c7680218f1       kube-proxy-96xbm                                 kube-system
	42a9a6c58384f       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   cd3104eae1688       kube-controller-manager-old-k8s-version-169816   kube-system
	2b36ea96b2622       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   8e2816a4457dc       etcd-old-k8s-version-169816                      kube-system
	fe1074945f471       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   1e8adf2fa35fc       kube-scheduler-old-k8s-version-169816            kube-system
	d602ff875b92b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   0de44075c4fda       kube-apiserver-old-k8s-version-169816            kube-system
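The table above is crictl's container listing from inside the node; a hedged equivalent, assuming crictl is configured against the CRI-O socket shown in the kubelet config earlier in this log:

    sudo crictl ps -a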
	
	
	==> coredns [545a19c0aceb77a225bc0b4f41cc94737c4b393be192c5431942f1ca5716bb80] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39718 - 61765 "HINFO IN 7045445207232453828.122638483308380106. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.470636794s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
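The repeated "waiting for Kubernetes API" and "Still waiting on: kubernetes" lines above show CoreDNS blocking on its initial API sync after the restart rather than crashing. A hedged way to confirm it settled, using standard kubectl and the usual k8s-app=kube-dns label:

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20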
	
	
	==> describe nodes <==
	Name:               old-k8s-version-169816
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-169816
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=old-k8s-version-169816
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_11_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:11:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-169816
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:13:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:13:09 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:13:09 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:13:09 +0000   Sun, 09 Nov 2025 14:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:13:09 +0000   Sun, 09 Nov 2025 14:12:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-169816
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                11632483-d582-4ced-bfcd-ac7706e38a54
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-5bgfs                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-169816                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-mjzvm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-169816             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-169816    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-96xbm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-169816             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-cqjl5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-v6s8t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x9 over 2m9s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                 kubelet          Node old-k8s-version-169816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                 kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                 kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                 node-controller  Node old-k8s-version-169816 event: Registered Node old-k8s-version-169816 in Controller
	  Normal  NodeReady                97s                  kubelet          Node old-k8s-version-169816 status is now: NodeReady
	  Normal  Starting                 62s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node old-k8s-version-169816 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node old-k8s-version-169816 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                  node-controller  Node old-k8s-version-169816 event: Registered Node old-k8s-version-169816 in Controller
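The node dump above is kubectl describe output captured by the test harness. A hedged reproduction from the host, assuming minikube's default context naming (context name = profile name):

    kubectl --context old-k8s-version-169816 describe node old-k8s-version-169816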
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
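The "martian source" lines above are the kernel noting packets that arrived on eth0 with an implausible 127.0.0.1 source; they are environment noise rather than a test failure, and the logging is gated by the log_martians sysctl. A hedged check (and, if the noise were unwanted, the toggle):

    sysctl net.ipv4.conf.all.log_martians
    sudo sysctl -w net.ipv4.conf.all.log_martians=0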
	
	
	==> etcd [2b36ea96b26225b43c2ec83d436d026e38a7613c24eadfbcb3d971fe39d0671b] <==
	{"level":"info","ts":"2025-11-09T14:12:39.234282Z","caller":"traceutil/trace.go:171","msg":"trace[124771433] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"108.262954ms","start":"2025-11-09T14:12:39.126004Z","end":"2025-11-09T14:12:39.234267Z","steps":["trace[124771433] 'process raft request'  (duration: 108.211313ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.661978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.207618ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356531688323987 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" mod_revision:467 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" value_size:654 lease:6414984494833548152 >> failure:<request_range:<key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:12:39.662166Z","caller":"traceutil/trace.go:171","msg":"trace[1172058608] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:493; }","duration":"272.069378ms","start":"2025-11-09T14:12:39.390085Z","end":"2025-11-09T14:12:39.662154Z","steps":["trace[1172058608] 'read index received'  (duration: 34.317457ms)","trace[1172058608] 'applied index is now lower than readState.Index'  (duration: 237.751114ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:12:39.662164Z","caller":"traceutil/trace.go:171","msg":"trace[1996140217] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"360.064314ms","start":"2025-11-09T14:12:39.302075Z","end":"2025-11-09T14:12:39.66214Z","steps":["trace[1996140217] 'process raft request'  (duration: 122.242942ms)","trace[1996140217] 'compare'  (duration: 237.105402ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:12:39.66225Z","caller":"traceutil/trace.go:171","msg":"trace[1439015845] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"358.774934ms","start":"2025-11-09T14:12:39.303459Z","end":"2025-11-09T14:12:39.662234Z","steps":["trace[1439015845] 'process raft request'  (duration: 358.620674ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.662285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-09T14:12:39.302059Z","time spent":"360.16749ms","remote":"127.0.0.1:34908","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":736,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" mod_revision:467 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" value_size:654 lease:6414984494833548152 >> failure:<request_range:<key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" > >"}
	{"level":"warn","ts":"2025-11-09T14:12:39.66234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.26687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"warn","ts":"2025-11-09T14:12:39.662346Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-09T14:12:39.303438Z","time spent":"358.853322ms","remote":"127.0.0.1:35032","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4299,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-169816\" mod_revision:320 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-169816\" value_size:4227 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-169816\" > >"}
	{"level":"info","ts":"2025-11-09T14:12:39.662367Z","caller":"traceutil/trace.go:171","msg":"trace[1058092481] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:474; }","duration":"272.301034ms","start":"2025-11-09T14:12:39.390058Z","end":"2025-11-09T14:12:39.662359Z","steps":["trace[1058092481] 'agreement among raft nodes before linearized reading'  (duration: 272.171009ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.662462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.673824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2025-11-09T14:12:39.662488Z","caller":"traceutil/trace.go:171","msg":"trace[440316120] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:474; }","duration":"121.698179ms","start":"2025-11-09T14:12:39.54078Z","end":"2025-11-09T14:12:39.662479Z","steps":["trace[440316120] 'agreement among raft nodes before linearized reading'  (duration: 121.64525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.662628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.869337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"range_response_count:65 size:58983"}
	{"level":"info","ts":"2025-11-09T14:12:39.662684Z","caller":"traceutil/trace.go:171","msg":"trace[1664488522] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:65; response_revision:474; }","duration":"121.92896ms","start":"2025-11-09T14:12:39.540746Z","end":"2025-11-09T14:12:39.662675Z","steps":["trace[1664488522] 'agreement among raft nodes before linearized reading'  (duration: 121.51431ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.975104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.370931ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356531688324006 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/namespaces/kubernetes-dashboard\" mod_revision:0 > success:<request_put:<key:\"/registry/namespaces/kubernetes-dashboard\" value_size:833 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:12:39.975257Z","caller":"traceutil/trace.go:171","msg":"trace[1746771980] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"276.001983ms","start":"2025-11-09T14:12:39.699247Z","end":"2025-11-09T14:12:39.975249Z","steps":["trace[1746771980] 'process raft request'  (duration: 275.936526ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:12:39.975264Z","caller":"traceutil/trace.go:171","msg":"trace[1291155925] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"282.066489ms","start":"2025-11-09T14:12:39.693172Z","end":"2025-11-09T14:12:39.975238Z","steps":["trace[1291155925] 'process raft request'  (duration: 130.512839ms)","trace[1291155925] 'compare'  (duration: 151.269685ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:12:39.975259Z","caller":"traceutil/trace.go:171","msg":"trace[986209878] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"280.989839ms","start":"2025-11-09T14:12:39.694251Z","end":"2025-11-09T14:12:39.975241Z","steps":["trace[986209878] 'read index received'  (duration: 129.44398ms)","trace[986209878] 'applied index is now lower than readState.Index'  (duration: 151.543555ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:12:39.975353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.140346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/old-k8s-version-169816.18765c16769cae31\" ","response":"range_response_count:1 size:751"}
	{"level":"info","ts":"2025-11-09T14:12:39.975492Z","caller":"traceutil/trace.go:171","msg":"trace[1794406723] range","detail":"{range_begin:/registry/events/default/old-k8s-version-169816.18765c16769cae31; range_end:; response_count:1; response_revision:479; }","duration":"281.228776ms","start":"2025-11-09T14:12:39.694197Z","end":"2025-11-09T14:12:39.975425Z","steps":["trace[1794406723] 'agreement among raft nodes before linearized reading'  (duration: 281.081664ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.975535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.897299ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:public-info-viewer\" ","response":"range_response_count:1 size:783"}
	{"level":"info","ts":"2025-11-09T14:12:39.975589Z","caller":"traceutil/trace.go:171","msg":"trace[598876810] range","detail":"{range_begin:/registry/clusterrolebindings/system:public-info-viewer; range_end:; response_count:1; response_revision:479; }","duration":"281.0014ms","start":"2025-11-09T14:12:39.694576Z","end":"2025-11-09T14:12:39.975578Z","steps":["trace[598876810] 'agreement among raft nodes before linearized reading'  (duration: 280.862541ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:12:39.97554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.985689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1122"}
	{"level":"info","ts":"2025-11-09T14:12:39.975797Z","caller":"traceutil/trace.go:171","msg":"trace[646892152] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:479; }","duration":"277.242713ms","start":"2025-11-09T14:12:39.698541Z","end":"2025-11-09T14:12:39.975784Z","steps":["trace[646892152] 'agreement among raft nodes before linearized reading'  (duration: 276.949729ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:21.79142Z","caller":"traceutil/trace.go:171","msg":"trace[2056334423] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"122.70015ms","start":"2025-11-09T14:13:21.668699Z","end":"2025-11-09T14:13:21.791399Z","steps":["trace[2056334423] 'process raft request'  (duration: 122.486354ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:21.873366Z","caller":"traceutil/trace.go:171","msg":"trace[986599777] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"201.695883ms","start":"2025-11-09T14:13:21.67165Z","end":"2025-11-09T14:13:21.873346Z","steps":["trace[986599777] 'process raft request'  (duration: 201.586469ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:13:37 up 56 min,  0 user,  load average: 2.95, 2.85, 1.87
	Linux old-k8s-version-169816 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bcf75d94a9dc6663fd1f0d1a24e10fdcd1c666fa884d21235773bb0d377856fc] <==
	I1109 14:12:40.263975       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:12:40.264257       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:12:40.264414       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:12:40.264436       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:12:40.264466       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:12:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:12:40.525995       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:12:40.526104       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:12:40.526124       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:12:40.526327       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:12:40.826757       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:12:40.826788       1 metrics.go:72] Registering metrics
	I1109 14:12:40.826856       1 controller.go:711] "Syncing nftables rules"
	I1109 14:12:50.526710       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:12:50.526789       1 main.go:301] handling current node
	I1109 14:13:00.526831       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:13:00.526885       1 main.go:301] handling current node
	I1109 14:13:10.526869       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:13:10.526902       1 main.go:301] handling current node
	I1109 14:13:20.527164       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:13:20.527197       1 main.go:301] handling current node
	I1109 14:13:30.532739       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:13:30.532785       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d602ff875b92b7937f6fd0b9e58ec36e97373d0bb858bcc87ab19cd3955c7caa] <==
	I1109 14:12:38.639941       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1109 14:12:38.639967       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:12:38.643559       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1109 14:12:38.643594       1 aggregator.go:166] initial CRD sync complete...
	I1109 14:12:38.643605       1 autoregister_controller.go:141] Starting autoregister controller
	I1109 14:12:38.643610       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:12:38.643615       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:12:38.680009       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:12:39.063627       1 trace.go:236] Trace[463023398]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b2dc087e-3739-423c-a867-49b94d07ddd2,client:192.168.76.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/amd64) kubernetes/855e7c4,verb:POST (09-Nov-2025 14:12:38.558) (total time: 504ms):
	Trace[463023398]: [504.827686ms] [504.827686ms] END
	E1109 14:12:39.107437       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:12:39.107596       1 trace.go:236] Trace[856774394]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:eee20d90-40b3-4444-bbb2-95f46f406f66,client:192.168.76.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/amd64) kubernetes/855e7c4,verb:POST (09-Nov-2025 14:12:38.560) (total time: 546ms):
	Trace[856774394]: ---"Write to database call failed" len:4049,err:nodes "old-k8s-version-169816" already exists 184ms (14:12:39.107)
	Trace[856774394]: [546.599597ms] [546.599597ms] END
	I1109 14:12:39.665708       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:12:39.690355       1 controller.go:624] quota admission added evaluator for: namespaces
	I1109 14:12:40.029141       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1109 14:12:40.064568       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:12:40.087225       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:12:40.101339       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1109 14:12:40.148476       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.98.248"}
	I1109 14:12:40.164024       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.223.89"}
	I1109 14:12:51.211879       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:12:51.222820       1 controller.go:624] quota admission added evaluator for: endpoints
	I1109 14:12:51.347762       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [42a9a6c58384f12f9ac88b28ed9881f46da5d5ba7cacd3e83d0b643736dfe489] <==
	I1109 14:12:51.350866       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1109 14:12:51.351396       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1109 14:12:51.353692       1 shared_informer.go:318] Caches are synced for disruption
	I1109 14:12:51.358800       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-cqjl5"
	I1109 14:12:51.359031       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-v6s8t"
	I1109 14:12:51.364585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.690806ms"
	I1109 14:12:51.365498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.01335ms"
	I1109 14:12:51.372289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.654722ms"
	I1109 14:12:51.372316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.770592ms"
	I1109 14:12:51.372371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.125µs"
	I1109 14:12:51.372392       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.943µs"
	I1109 14:12:51.374730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.185µs"
	I1109 14:12:51.382849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.465µs"
	I1109 14:12:51.724961       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:12:51.724987       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1109 14:12:51.736112       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:12:54.825405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.99µs"
	I1109 14:12:55.823863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="146.322µs"
	I1109 14:12:56.833690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="132.578µs"
	I1109 14:12:58.848695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.166292ms"
	I1109 14:12:58.849035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.192µs"
	I1109 14:13:12.871930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.666µs"
	I1109 14:13:18.816453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.821579ms"
	I1109 14:13:18.816581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.442µs"
	I1109 14:13:21.875354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.804µs"
	
	
	==> kube-proxy [540482e832269305a074529fb0e1c0638067596f1030ebd9cff2130f4a71b8d0] <==
	I1109 14:12:40.120137       1 server_others.go:69] "Using iptables proxy"
	I1109 14:12:40.133619       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1109 14:12:40.159711       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:12:40.162701       1 server_others.go:152] "Using iptables Proxier"
	I1109 14:12:40.162739       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 14:12:40.162750       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 14:12:40.162797       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 14:12:40.163465       1 server.go:846] "Version info" version="v1.28.0"
	I1109 14:12:40.163591       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:12:40.165430       1 config.go:188] "Starting service config controller"
	I1109 14:12:40.165450       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 14:12:40.165487       1 config.go:97] "Starting endpoint slice config controller"
	I1109 14:12:40.165492       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 14:12:40.164565       1 config.go:315] "Starting node config controller"
	I1109 14:12:40.165518       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 14:12:40.265786       1 shared_informer.go:318] Caches are synced for node config
	I1109 14:12:40.265788       1 shared_informer.go:318] Caches are synced for service config
	I1109 14:12:40.265806       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fe1074945f47108035d7de260124d948b1a6cc022b75173093c351eed9c62fe8] <==
	I1109 14:12:36.638344       1 serving.go:348] Generated self-signed cert in-memory
	W1109 14:12:38.588841       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:12:38.588874       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:12:38.588890       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:12:38.588900       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:12:38.615431       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1109 14:12:38.615452       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:12:38.616671       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:12:38.616702       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 14:12:38.617558       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1109 14:12:38.617582       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 14:12:38.717485       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.364367     715 topology_manager.go:215] "Topology Admit Handler" podUID="b40e7490-7646-4e1e-a89a-0936a3e8ca71" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-v6s8t"
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.446942     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt8m6\" (UniqueName: \"kubernetes.io/projected/aa8a7864-ba51-4e08-88fe-3f4eab718219-kube-api-access-pt8m6\") pod \"dashboard-metrics-scraper-5f989dc9cf-cqjl5\" (UID: \"aa8a7864-ba51-4e08-88fe-3f4eab718219\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5"
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.446998     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aa8a7864-ba51-4e08-88fe-3f4eab718219-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-cqjl5\" (UID: \"aa8a7864-ba51-4e08-88fe-3f4eab718219\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5"
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.547426     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b40e7490-7646-4e1e-a89a-0936a3e8ca71-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-v6s8t\" (UID: \"b40e7490-7646-4e1e-a89a-0936a3e8ca71\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t"
	Nov 09 14:12:51 old-k8s-version-169816 kubelet[715]: I1109 14:12:51.547483     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkdr\" (UniqueName: \"kubernetes.io/projected/b40e7490-7646-4e1e-a89a-0936a3e8ca71-kube-api-access-8tkdr\") pod \"kubernetes-dashboard-8694d4445c-v6s8t\" (UID: \"b40e7490-7646-4e1e-a89a-0936a3e8ca71\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t"
	Nov 09 14:12:54 old-k8s-version-169816 kubelet[715]: I1109 14:12:54.805326     715 scope.go:117] "RemoveContainer" containerID="7458fb4fcd4b802d45cf24db515248f34a335fa27699fa6f8578fdb8297b51d6"
	Nov 09 14:12:55 old-k8s-version-169816 kubelet[715]: I1109 14:12:55.809866     715 scope.go:117] "RemoveContainer" containerID="7458fb4fcd4b802d45cf24db515248f34a335fa27699fa6f8578fdb8297b51d6"
	Nov 09 14:12:55 old-k8s-version-169816 kubelet[715]: I1109 14:12:55.810069     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:12:55 old-k8s-version-169816 kubelet[715]: E1109 14:12:55.810482     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:12:56 old-k8s-version-169816 kubelet[715]: I1109 14:12:56.816522     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:12:56 old-k8s-version-169816 kubelet[715]: E1109 14:12:56.817298     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:12:58 old-k8s-version-169816 kubelet[715]: I1109 14:12:58.836347     715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v6s8t" podStartSLOduration=1.296745624 podCreationTimestamp="2025-11-09 14:12:51 +0000 UTC" firstStartedPulling="2025-11-09 14:12:51.687535591 +0000 UTC m=+16.072416886" lastFinishedPulling="2025-11-09 14:12:58.227076979 +0000 UTC m=+22.611958271" observedRunningTime="2025-11-09 14:12:58.835691575 +0000 UTC m=+23.220572878" watchObservedRunningTime="2025-11-09 14:12:58.836287009 +0000 UTC m=+23.221168313"
	Nov 09 14:13:01 old-k8s-version-169816 kubelet[715]: I1109 14:13:01.665121     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:13:01 old-k8s-version-169816 kubelet[715]: E1109 14:13:01.665467     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:13:10 old-k8s-version-169816 kubelet[715]: I1109 14:13:10.852521     715 scope.go:117] "RemoveContainer" containerID="ebbd92bd47e4fe8bee5069fb179364ea166dbcf2c0168bd25cf3692734490d1b"
	Nov 09 14:13:12 old-k8s-version-169816 kubelet[715]: I1109 14:13:12.714716     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:13:12 old-k8s-version-169816 kubelet[715]: I1109 14:13:12.860668     715 scope.go:117] "RemoveContainer" containerID="6528cbb2efb76eb233f386f2a820d40289b92223061828ff768d532889b21a5c"
	Nov 09 14:13:12 old-k8s-version-169816 kubelet[715]: I1109 14:13:12.860924     715 scope.go:117] "RemoveContainer" containerID="02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd"
	Nov 09 14:13:12 old-k8s-version-169816 kubelet[715]: E1109 14:13:12.861283     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:13:21 old-k8s-version-169816 kubelet[715]: I1109 14:13:21.665435     715 scope.go:117] "RemoveContainer" containerID="02e49d47c1f9e63a3a265c977f212e64f614d971a7730fcecb06fa865e3fd2bd"
	Nov 09 14:13:21 old-k8s-version-169816 kubelet[715]: E1109 14:13:21.665834     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-cqjl5_kubernetes-dashboard(aa8a7864-ba51-4e08-88fe-3f4eab718219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-cqjl5" podUID="aa8a7864-ba51-4e08-88fe-3f4eab718219"
	Nov 09 14:13:32 old-k8s-version-169816 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:13:32 old-k8s-version-169816 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:13:32 old-k8s-version-169816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:13:32 old-k8s-version-169816 systemd[1]: kubelet.service: Consumed 1.513s CPU time.
	
	
	==> kubernetes-dashboard [9d2477a32ffe81fe2b753f7e91fc77dbab0cbafcfb806221411b016a92e93c58] <==
	2025/11/09 14:12:58 Starting overwatch
	2025/11/09 14:12:58 Using namespace: kubernetes-dashboard
	2025/11/09 14:12:58 Using in-cluster config to connect to apiserver
	2025/11/09 14:12:58 Using secret token for csrf signing
	2025/11/09 14:12:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:12:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:12:58 Successful initial request to the apiserver, version: v1.28.0
	2025/11/09 14:12:58 Generating JWE encryption key
	2025/11/09 14:12:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:12:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:12:58 Initializing JWE encryption key from synchronized object
	2025/11/09 14:12:58 Creating in-cluster Sidecar client
	2025/11/09 14:12:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:12:58 Serving insecurely on HTTP port: 9090
	2025/11/09 14:13:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [e556b09a1766377ab449cade3e737046d611f33f22a514ef8afa52ec096e82fa] <==
	I1109 14:13:10.953956       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:13:10.962481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:13:10.962525       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 14:13:28.385210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:13:28.385401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169816_41dc1996-a973-44fe-b7b1-06181a889cfb!
	I1109 14:13:28.385730       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c4c27452-9f34-4b03-8815-bd5ff2390444", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-169816_41dc1996-a973-44fe-b7b1-06181a889cfb became leader
	I1109 14:13:28.485951       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169816_41dc1996-a973-44fe-b7b1-06181a889cfb!
	
	
	==> storage-provisioner [ebbd92bd47e4fe8bee5069fb179364ea166dbcf2c0168bd25cf3692734490d1b] <==
	I1109 14:12:40.085226       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:13:10.087787       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169816 -n old-k8s-version-169816
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169816 -n old-k8s-version-169816: exit status 2 (314.519547ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-169816 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.97s)
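
The kubelet entries above end with systemd stopping kubelet.service at 14:13:32, which is the first step `minikube pause` performs before freezing the remaining containers. A minimal manual re-check of this profile after such a failure, using only command forms that already appear in this report (the profile name comes from the logs above; it assumes the test binary and kubeconfig from this run are still on the host), could look like:

    # confirm component state the same way the post-mortem helpers do
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169816 -n old-k8s-version-169816
    # check whether the failed pause left kubelet stopped/disabled on the node
    out/minikube-linux-amd64 ssh -p old-k8s-version-169816 -- sudo systemctl is-active kubelet
    # list any pods that are no longer Running
    kubectl --context old-k8s-version-169816 get po -A --field-selector=status.phase!=Running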

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (5.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-152932 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-152932 --alsologtostderr -v=1: exit status 80 (1.857487602s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-152932 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:13:45.484678  263432 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:13:45.484767  263432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:45.484774  263432 out.go:374] Setting ErrFile to fd 2...
	I1109 14:13:45.484778  263432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:45.484983  263432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:13:45.485195  263432 out.go:368] Setting JSON to false
	I1109 14:13:45.485246  263432 mustload.go:66] Loading cluster: no-preload-152932
	I1109 14:13:45.485587  263432 config.go:182] Loaded profile config "no-preload-152932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:45.486039  263432 cli_runner.go:164] Run: docker container inspect no-preload-152932 --format={{.State.Status}}
	I1109 14:13:45.504270  263432 host.go:66] Checking if "no-preload-152932" exists ...
	I1109 14:13:45.504517  263432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:45.558767  263432 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:13:45.547164167 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:45.559426  263432 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-152932 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:13:45.685782  263432 out.go:179] * Pausing node no-preload-152932 ... 
	I1109 14:13:45.717401  263432 host.go:66] Checking if "no-preload-152932" exists ...
	I1109 14:13:45.717803  263432 ssh_runner.go:195] Run: systemctl --version
	I1109 14:13:45.717856  263432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-152932
	I1109 14:13:45.737136  263432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/no-preload-152932/id_rsa Username:docker}
	I1109 14:13:45.831091  263432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:45.842914  263432 pause.go:52] kubelet running: true
	I1109 14:13:45.842976  263432 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:13:46.013163  263432 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:13:46.013250  263432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:13:46.082069  263432 cri.go:89] found id: "3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302"
	I1109 14:13:46.082096  263432 cri.go:89] found id: "715a12e55ec25ed041af2464c61ec016d0a834599330b4a3c827207d39b46446"
	I1109 14:13:46.082103  263432 cri.go:89] found id: "d406a9ea73e73d73cdb87a434fe1a06c0466bce9b5cb85caaac981129659bac7"
	I1109 14:13:46.082107  263432 cri.go:89] found id: "0d9225a0084fd2c65edc6252bc7dcfd828517bdfd189beecc698239ca133276e"
	I1109 14:13:46.082119  263432 cri.go:89] found id: "2972477eef10b1150da7a7877e662c3ece69fa35d977d11ddf7c1dde8d07a06e"
	I1109 14:13:46.082124  263432 cri.go:89] found id: "720328094beb2d632eb2d2a85e9b29dcb52b6079d330d9dd8a4433bc8fb804e2"
	I1109 14:13:46.082129  263432 cri.go:89] found id: "6e5e44f605ea5dfc9e66bf4f541d6f330c095b954f694bbe92385ce144a28bf1"
	I1109 14:13:46.082133  263432 cri.go:89] found id: "3f1aabc8689fc625730c21fb73ae4ca9894c05548c4cc4324892ed483033febf"
	I1109 14:13:46.082137  263432 cri.go:89] found id: "a524a17510d58e32d7372cb39f92a6de4d0886452efe91e57c3b96f7dfcef202"
	I1109 14:13:46.082149  263432 cri.go:89] found id: "d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c"
	I1109 14:13:46.082157  263432 cri.go:89] found id: "921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	I1109 14:13:46.082161  263432 cri.go:89] found id: "b819fcdfa94d5a72e23872275d42eb5ba78917d6af9789a2cc366b2d19075ae8"
	I1109 14:13:46.082165  263432 cri.go:89] found id: ""
	I1109 14:13:46.082211  263432 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:13:46.094270  263432 retry.go:31] will retry after 125.803611ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:46Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:13:46.220655  263432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:46.232921  263432 pause.go:52] kubelet running: false
	I1109 14:13:46.232989  263432 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:13:46.396179  263432 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:13:46.396238  263432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:13:46.468749  263432 cri.go:89] found id: "3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302"
	I1109 14:13:46.468773  263432 cri.go:89] found id: "715a12e55ec25ed041af2464c61ec016d0a834599330b4a3c827207d39b46446"
	I1109 14:13:46.468779  263432 cri.go:89] found id: "d406a9ea73e73d73cdb87a434fe1a06c0466bce9b5cb85caaac981129659bac7"
	I1109 14:13:46.468784  263432 cri.go:89] found id: "0d9225a0084fd2c65edc6252bc7dcfd828517bdfd189beecc698239ca133276e"
	I1109 14:13:46.468788  263432 cri.go:89] found id: "2972477eef10b1150da7a7877e662c3ece69fa35d977d11ddf7c1dde8d07a06e"
	I1109 14:13:46.468794  263432 cri.go:89] found id: "720328094beb2d632eb2d2a85e9b29dcb52b6079d330d9dd8a4433bc8fb804e2"
	I1109 14:13:46.468798  263432 cri.go:89] found id: "6e5e44f605ea5dfc9e66bf4f541d6f330c095b954f694bbe92385ce144a28bf1"
	I1109 14:13:46.468802  263432 cri.go:89] found id: "3f1aabc8689fc625730c21fb73ae4ca9894c05548c4cc4324892ed483033febf"
	I1109 14:13:46.468806  263432 cri.go:89] found id: "a524a17510d58e32d7372cb39f92a6de4d0886452efe91e57c3b96f7dfcef202"
	I1109 14:13:46.468823  263432 cri.go:89] found id: "d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c"
	I1109 14:13:46.468831  263432 cri.go:89] found id: "b819fcdfa94d5a72e23872275d42eb5ba78917d6af9789a2cc366b2d19075ae8"
	I1109 14:13:46.468835  263432 cri.go:89] found id: ""
	I1109 14:13:46.468880  263432 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:13:46.481099  263432 retry.go:31] will retry after 479.642527ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:46Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:13:46.961821  263432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:46.978255  263432 pause.go:52] kubelet running: false
	I1109 14:13:46.978341  263432 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:13:47.162333  263432 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:13:47.162414  263432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:13:47.243662  263432 cri.go:89] found id: "3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302"
	I1109 14:13:47.243689  263432 cri.go:89] found id: "715a12e55ec25ed041af2464c61ec016d0a834599330b4a3c827207d39b46446"
	I1109 14:13:47.243695  263432 cri.go:89] found id: "d406a9ea73e73d73cdb87a434fe1a06c0466bce9b5cb85caaac981129659bac7"
	I1109 14:13:47.243701  263432 cri.go:89] found id: "0d9225a0084fd2c65edc6252bc7dcfd828517bdfd189beecc698239ca133276e"
	I1109 14:13:47.243706  263432 cri.go:89] found id: "2972477eef10b1150da7a7877e662c3ece69fa35d977d11ddf7c1dde8d07a06e"
	I1109 14:13:47.243711  263432 cri.go:89] found id: "720328094beb2d632eb2d2a85e9b29dcb52b6079d330d9dd8a4433bc8fb804e2"
	I1109 14:13:47.243716  263432 cri.go:89] found id: "6e5e44f605ea5dfc9e66bf4f541d6f330c095b954f694bbe92385ce144a28bf1"
	I1109 14:13:47.243721  263432 cri.go:89] found id: "3f1aabc8689fc625730c21fb73ae4ca9894c05548c4cc4324892ed483033febf"
	I1109 14:13:47.243725  263432 cri.go:89] found id: "a524a17510d58e32d7372cb39f92a6de4d0886452efe91e57c3b96f7dfcef202"
	I1109 14:13:47.243732  263432 cri.go:89] found id: "d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c"
	I1109 14:13:47.243741  263432 cri.go:89] found id: "b819fcdfa94d5a72e23872275d42eb5ba78917d6af9789a2cc366b2d19075ae8"
	I1109 14:13:47.243745  263432 cri.go:89] found id: ""
	I1109 14:13:47.243790  263432 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:13:47.264144  263432 out.go:203] 
	W1109 14:13:47.270787  263432 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:13:47.270808  263432 out.go:285] * 
	* 
	W1109 14:13:47.278463  263432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:13:47.280803  263432 out.go:203] 

                                                
                                                
** /stderr **
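
Every retry above stops at the same point: `sudo runc list -f json` exits 1 because /run/runc does not exist, so the pause code never gets a list of running containers to freeze, even though its own crictl call just enumerated them. A small sketch for confirming, from inside the node, where the CRI-O-managed container state actually lives; the /run/crio and /etc/crio paths are assumptions about a stock CRI-O layout, not something shown in this report:

    # does the state directory runc is asked about exist, and what root does CRI-O configure instead?
    out/minikube-linux-amd64 ssh -p no-preload-152932 -- sudo ls -ld /run/runc /run/crio
    out/minikube-linux-amd64 ssh -p no-preload-152932 -- sudo grep -Rn runtime_root /etc/crio/
    # the containers themselves remain visible through the CRI, as in the pause code's own crictl call
    out/minikube-linux-amd64 ssh -p no-preload-152932 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
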
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-152932 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-152932
helpers_test.go:243: (dbg) docker inspect no-preload-152932:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf",
	        "Created": "2025-11-09T14:11:31.387722642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251011,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:12:49.630034696Z",
	            "FinishedAt": "2025-11-09T14:12:48.783525455Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/hosts",
	        "LogPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf-json.log",
	        "Name": "/no-preload-152932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-152932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-152932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf",
	                "LowerDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-152932",
	                "Source": "/var/lib/docker/volumes/no-preload-152932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-152932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-152932",
	                "name.minikube.sigs.k8s.io": "no-preload-152932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab72d7fa1cfa89b38076ff1d8ee57da3d4a0df9ad5b1886eba9660e375d180b1",
	            "SandboxKey": "/var/run/docker/netns/ab72d7fa1cfa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-152932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:36:28:42:bb:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c509180f30963f7e773167a4898cba178d323dd41609baf99fe1db9a86f38a9",
	                    "EndpointID": "bc74bf022a367a0211de409dff5f1a0d0e0ef80fea8f234fa5868d117501a22a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-152932",
	                        "026fe7b1acd1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
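The inspect output above shows the published ports for the no-preload-152932 container: each exposed container port (22, 2376, 5000, 8443, 32443) is bound to a 127.0.0.1 host port, with the Kubernetes API server port 8443/tcp mapped to 127.0.0.1:33078. As a minimal sketch (container name and port values taken from the output above, not from a live run), the same mapping can be read back directly from Docker:

	docker port no-preload-152932 8443/tcp
	# 127.0.0.1:33078 on this run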
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152932 -n no-preload-152932
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152932 -n no-preload-152932: exit status 2 (408.880808ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
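The helper tolerates this result: stdout reports the host as Running while the status command still exits non-zero, presumably because not every component is in its expected state after the pause attempt, which is why it logs "(may be ok)" and continues with the post-mortem. A minimal reproduction sketch (profile name taken from this report; shell quoting added around the Go template):

	out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-152932 -n no-preload-152932
	echo "exit code: $?"   # prints Running above, yet exited 2 on this run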
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-152932 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-152932 logs -n 25: (1.060295421s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-883873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-169816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ stop    │ -p no-preload-152932 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ delete  │ -p cert-expiration-883873                                                                                                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-152932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p kubernetes-upgrade-755159                                                                                                                                                                                                                  │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ stop    │ -p embed-certs-273180 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-169816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ image   │ no-preload-152932 image list --format=json                                                                                                                                                                                                    │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p no-preload-152932 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:13:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:13:47.191040  264151 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:13:47.191290  264151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:47.191303  264151 out.go:374] Setting ErrFile to fd 2...
	I1109 14:13:47.191309  264151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:47.191631  264151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:13:47.192252  264151 out.go:368] Setting JSON to false
	I1109 14:13:47.193989  264151 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3377,"bootTime":1762694250,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:13:47.194106  264151 start.go:143] virtualization: kvm guest
	I1109 14:13:47.195682  264151 out.go:179] * [embed-certs-273180] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:13:47.196934  264151 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:13:47.196949  264151 notify.go:221] Checking for updates...
	I1109 14:13:47.199813  264151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:13:47.201419  264151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:13:47.204207  264151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:13:47.205355  264151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:13:47.206486  264151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:13:47.208111  264151 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:47.208796  264151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:13:47.236955  264151 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:13:47.237074  264151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:47.305532  264151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-09 14:13:47.292768478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:47.305712  264151 docker.go:319] overlay module found
	I1109 14:13:47.307334  264151 out.go:179] * Using the docker driver based on existing profile
	I1109 14:13:47.309245  264151 start.go:309] selected driver: docker
	I1109 14:13:47.309264  264151 start.go:930] validating driver "docker" against &{Name:embed-certs-273180 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-273180 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:47.309367  264151 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:13:47.310124  264151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:47.380373  264151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-09 14:13:47.368952216 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:47.380744  264151 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:47.380779  264151 cni.go:84] Creating CNI manager for ""
	I1109 14:13:47.380836  264151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:47.380876  264151 start.go:353] cluster config:
	{Name:embed-certs-273180 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-273180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:47.386919  264151 out.go:179] * Starting "embed-certs-273180" primary control-plane node in "embed-certs-273180" cluster
	I1109 14:13:47.388053  264151 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:13:47.389139  264151 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:13:47.390116  264151 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:47.390140  264151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:13:47.390158  264151 cache.go:65] Caching tarball of preloaded images
	I1109 14:13:47.390205  264151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:13:47.390261  264151 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:13:47.390278  264151 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:13:47.390371  264151 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/config.json ...
	I1109 14:13:47.416451  264151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:13:47.416474  264151 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:13:47.416492  264151 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:13:47.416517  264151 start.go:360] acquireMachinesLock for embed-certs-273180: {Name:mk5ccfef789f60e3d67a3edba8bce23983e1d48c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:13:47.416579  264151 start.go:364] duration metric: took 37.88µs to acquireMachinesLock for "embed-certs-273180"
	I1109 14:13:47.416607  264151 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:13:47.416617  264151 fix.go:54] fixHost starting: 
	I1109 14:13:47.416900  264151 cli_runner.go:164] Run: docker container inspect embed-certs-273180 --format={{.State.Status}}
	I1109 14:13:47.436052  264151 fix.go:112] recreateIfNeeded on embed-certs-273180: state=Stopped err=<nil>
	W1109 14:13:47.436078  264151 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 09 14:13:22 no-preload-152932 crio[561]: time="2025-11-09T14:13:22.177585794Z" level=info msg="Started container" PID=1733 containerID=921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper id=0861e8d4-4644-4c4a-9362-a117aeffc0b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d02302c78c3e4e26c3b69c1102d9daeb0d5704650f8ed0963582f46dcf9c256
	Nov 09 14:13:22 no-preload-152932 crio[561]: time="2025-11-09T14:13:22.909001881Z" level=info msg="Removing container: 4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9" id=0185a56d-6cc8-4cff-8f3f-37624d122f6d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:13:22 no-preload-152932 crio[561]: time="2025-11-09T14:13:22.919942325Z" level=info msg="Removed container 4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper" id=0185a56d-6cc8-4cff-8f3f-37624d122f6d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.929916503Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eb1da186-a028-43e2-ac68-74fd573a8a20 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.930856088Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9ed0954f-057d-41e8-997a-a4ad5b20bebd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.931788162Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ea9fd231-df6b-43af-9c81-2e735b203436 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.932016001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.936192049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.936359435Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e91ddbbb2d78c621ae7c7d2524dd217fab063ef7fc82d23ddf504b1d978f638c/merged/etc/passwd: no such file or directory"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.93639029Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e91ddbbb2d78c621ae7c7d2524dd217fab063ef7fc82d23ddf504b1d978f638c/merged/etc/group: no such file or directory"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.936678642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.965456729Z" level=info msg="Created container 3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302: kube-system/storage-provisioner/storage-provisioner" id=ea9fd231-df6b-43af-9c81-2e735b203436 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.96590955Z" level=info msg="Starting container: 3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302" id=d8f6f0c0-fbc4-4a5e-98f9-27ee342e4796 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.967743834Z" level=info msg="Started container" PID=1747 containerID=3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302 description=kube-system/storage-provisioner/storage-provisioner id=d8f6f0c0-fbc4-4a5e-98f9-27ee342e4796 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8931f9acc7368bc426d612b499dbcaf52bf5585f0427ac283e594f380439e596
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.802966802Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=91de5d86-d395-4577-a534-aedc3ea58db2 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.851673643Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=65810300-6927-4605-9ce2-295037a525dd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.852770247Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper" id=5c8acba4-3c4c-4f50-b365-5b3a6eeabfee name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.852908641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.920814657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.921252285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:45 no-preload-152932 crio[561]: time="2025-11-09T14:13:45.025395914Z" level=info msg="Created container d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper" id=5c8acba4-3c4c-4f50-b365-5b3a6eeabfee name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:45 no-preload-152932 crio[561]: time="2025-11-09T14:13:45.026023001Z" level=info msg="Starting container: d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c" id=43f7ae77-70d9-4b06-82f2-51ee6b41077a name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:45 no-preload-152932 crio[561]: time="2025-11-09T14:13:45.02790971Z" level=info msg="Started container" PID=1782 containerID=d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper id=43f7ae77-70d9-4b06-82f2-51ee6b41077a name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d02302c78c3e4e26c3b69c1102d9daeb0d5704650f8ed0963582f46dcf9c256
	Nov 09 14:13:45 no-preload-152932 crio[561]: time="2025-11-09T14:13:45.978306009Z" level=info msg="Removing container: 921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd" id=7c835763-10b2-433d-92e4-bea4c1c4cae3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:13:46 no-preload-152932 crio[561]: time="2025-11-09T14:13:46.070348798Z" level=info msg="Removed container 921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper" id=7c835763-10b2-433d-92e4-bea4c1c4cae3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d37c6d69927ae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   1d02302c78c3e       dashboard-metrics-scraper-6ffb444bf9-fckr9   kubernetes-dashboard
	3cc885138bbff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   8931f9acc7368       storage-provisioner                          kube-system
	b819fcdfa94d5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   613fa32a6bb13       kubernetes-dashboard-855c9754f9-gcb5c        kubernetes-dashboard
	c75c2ccc3b87a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   0a9635253e9c7       busybox                                      default
	715a12e55ec25       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   e9f1d73202ec2       coredns-66bc5c9577-6ssc5                     kube-system
	d406a9ea73e73       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   ad14b56b22210       kube-proxy-f5tgg                             kube-system
	0d9225a0084fd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   106dd93444a04       kindnet-qk599                                kube-system
	2972477eef10b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   8931f9acc7368       storage-provisioner                          kube-system
	720328094beb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   71d02751c2c2a       kube-scheduler-no-preload-152932             kube-system
	6e5e44f605ea5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   3745afc61c596       kube-apiserver-no-preload-152932             kube-system
	3f1aabc8689fc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   dc90325ddd482       etcd-no-preload-152932                       kube-system
	a524a17510d58       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   673ee3c2d3122       kube-controller-manager-no-preload-152932    kube-system
	
	
	==> coredns [715a12e55ec25ed041af2464c61ec016d0a834599330b4a3c827207d39b46446] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33858 - 64248 "HINFO IN 1859816579354728020.8411571753029712130. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.097275714s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-152932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-152932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=no-preload-152932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_11_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-152932
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:13:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:13:30 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:13:30 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:13:30 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:13:30 +0000   Sun, 09 Nov 2025 14:12:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-152932
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                32c8871f-6491-4e30-8669-8cd62f18ad7c
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-6ssc5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-no-preload-152932                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-qk599                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-152932              250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-152932     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-f5tgg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-152932              100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fckr9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gcb5c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node no-preload-152932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node no-preload-152932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x8 over 114s)  kubelet          Node no-preload-152932 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node no-preload-152932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node no-preload-152932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node no-preload-152932 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node no-preload-152932 event: Registered Node no-preload-152932 in Controller
	  Normal  NodeReady                91s                  kubelet          Node no-preload-152932 status is now: NodeReady
	  Normal  Starting                 52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)    kubelet          Node no-preload-152932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)    kubelet          Node no-preload-152932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)    kubelet          Node no-preload-152932 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                  node-controller  Node no-preload-152932 event: Registered Node no-preload-152932 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [3f1aabc8689fc625730c21fb73ae4ca9894c05548c4cc4324892ed483033febf] <==
	{"level":"warn","ts":"2025-11-09T14:12:59.054065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.061819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.069626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.076956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.083999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.092120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.101455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.109295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.124143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.134514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.151910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.156030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.163726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.171766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.229278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48716","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:13:21.020813Z","caller":"traceutil/trace.go:172","msg":"trace[1914487713] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"131.5651ms","start":"2025-11-09T14:13:20.889227Z","end":"2025-11-09T14:13:21.020792Z","steps":["trace[1914487713] 'process raft request'  (duration: 130.704577ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:21.281906Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.681647ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T14:13:21.281986Z","caller":"traceutil/trace.go:172","msg":"trace[962648928] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:642; }","duration":"121.783385ms","start":"2025-11-09T14:13:21.160189Z","end":"2025-11-09T14:13:21.281973Z","steps":["trace[962648928] 'range keys from in-memory index tree'  (duration: 121.637136ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:21.873292Z","caller":"traceutil/trace.go:172","msg":"trace[2101697604] linearizableReadLoop","detail":"{readStateIndex:677; appliedIndex:677; }","duration":"106.195296ms","start":"2025-11-09T14:13:21.767062Z","end":"2025-11-09T14:13:21.873257Z","steps":["trace[2101697604] 'read index received'  (duration: 106.185604ms)","trace[2101697604] 'applied index is now lower than readState.Index'  (duration: 8.079µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:21.873435Z","caller":"traceutil/trace.go:172","msg":"trace[1493347500] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"109.344ms","start":"2025-11-09T14:13:21.764077Z","end":"2025-11-09T14:13:21.873421Z","steps":["trace[1493347500] 'process raft request'  (duration: 109.21024ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:21.873523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.43926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-6ssc5\" limit:1 ","response":"range_response_count:1 size:5935"}
	{"level":"info","ts":"2025-11-09T14:13:21.873571Z","caller":"traceutil/trace.go:172","msg":"trace[360841311] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-6ssc5; range_end:; response_count:1; response_revision:643; }","duration":"106.507911ms","start":"2025-11-09T14:13:21.767053Z","end":"2025-11-09T14:13:21.873561Z","steps":["trace[360841311] 'agreement among raft nodes before linearized reading'  (duration: 106.316185ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:44.919143Z","caller":"traceutil/trace.go:172","msg":"trace[1055551852] transaction","detail":"{read_only:false; response_revision:668; number_of_response:1; }","duration":"112.187288ms","start":"2025-11-09T14:13:44.806936Z","end":"2025-11-09T14:13:44.919123Z","steps":["trace[1055551852] 'process raft request'  (duration: 112.056045ms)"],"step_count":1}
	2025/11/09 14:13:45 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2025/11/09 14:13:45 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 14:13:48 up 56 min,  0 user,  load average: 4.28, 3.14, 1.97
	Linux no-preload-152932 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0d9225a0084fd2c65edc6252bc7dcfd828517bdfd189beecc698239ca133276e] <==
	I1109 14:13:00.294923       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:13:00.295176       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1109 14:13:00.295292       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:13:00.295305       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:13:00.295324       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:13:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:13:00.587675       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:13:00.587757       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:13:00.587768       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:13:00.588551       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:13:00.887882       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:13:00.887931       1 metrics.go:72] Registering metrics
	I1109 14:13:00.887991       1 controller.go:711] "Syncing nftables rules"
	I1109 14:13:10.588142       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:13:10.588216       1 main.go:301] handling current node
	I1109 14:13:20.587833       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:13:20.587863       1 main.go:301] handling current node
	I1109 14:13:30.588193       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:13:30.588220       1 main.go:301] handling current node
	I1109 14:13:40.591129       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:13:40.591168       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e5e44f605ea5dfc9e66bf4f541d6f330c095b954f694bbe92385ce144a28bf1] <==
	I1109 14:12:59.764527       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:12:59.862779       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:12:59.990008       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:13:00.017765       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:13:00.033622       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:13:00.039216       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:13:00.074151       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.26.66"}
	I1109 14:13:00.085049       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.48.231"}
	I1109 14:13:00.613276       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:13:03.404193       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:13:03.404246       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:13:03.553756       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:13:03.606755       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	{"level":"warn","ts":"2025-11-09T14:13:45.996998Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012814a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	{"level":"warn","ts":"2025-11-09T14:13:45.996953Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f1d860/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1109 14:13:45.997086       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.997110       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.353µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1109 14:13:45.997117       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.997115       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 36.366µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1109 14:13:45.997086       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.998233       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.998261       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.998267       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.999421       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.420386ms" method="PATCH" path="/api/v1/namespaces/kubernetes-dashboard/events/dashboard-metrics-scraper-6ffb444bf9-fckr9.18765c1ee2283d93" result=null
	E1109 14:13:45.999464       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.549748ms" method="PATCH" path="/api/v1/namespaces/kubernetes-dashboard/pods/dashboard-metrics-scraper-6ffb444bf9-fckr9/status" result=null
	
	
	==> kube-controller-manager [a524a17510d58e32d7372cb39f92a6de4d0886452efe91e57c3b96f7dfcef202] <==
	I1109 14:13:03.017323       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:13:03.037487       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:13:03.048903       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:13:03.048919       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:13:03.048938       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:13:03.048946       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 14:13:03.049256       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:13:03.049263       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:13:03.049377       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:13:03.049450       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:13:03.049511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:13:03.049885       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:13:03.049938       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:13:03.050140       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:13:03.050340       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:13:03.050394       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:13:03.051234       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:13:03.053854       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:13:03.054678       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:13:03.056717       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:13:03.058935       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:13:03.062224       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:13:03.068518       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:13:03.082727       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:13:03.089118       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d406a9ea73e73d73cdb87a434fe1a06c0466bce9b5cb85caaac981129659bac7] <==
	I1109 14:13:00.193124       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:13:00.258869       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:13:00.359779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:13:00.359823       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1109 14:13:00.359952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:13:00.378542       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:13:00.378584       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:13:00.383491       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:13:00.383904       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:13:00.383937       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:13:00.385400       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:13:00.385430       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:13:00.385480       1 config.go:309] "Starting node config controller"
	I1109 14:13:00.385656       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:13:00.385865       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:13:00.385879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:13:00.386364       1 config.go:200] "Starting service config controller"
	I1109 14:13:00.386380       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:13:00.486407       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:13:00.486425       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:13:00.486456       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:13:00.486517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [720328094beb2d632eb2d2a85e9b29dcb52b6079d330d9dd8a4433bc8fb804e2] <==
	I1109 14:12:58.554921       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:12:59.641159       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:12:59.641203       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:12:59.641215       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:12:59.641224       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:12:59.673442       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:12:59.673473       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:12:59.676762       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:12:59.676796       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:12:59.677159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:12:59.677229       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:12:59.777020       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:13:03 no-preload-152932 kubelet[700]: I1109 14:13:03.788427     700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b63bef34-a8fe-46c6-b524-40d9292214e9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-gcb5c\" (UID: \"b63bef34-a8fe-46c6-b524-40d9292214e9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gcb5c"
	Nov 09 14:13:03 no-preload-152932 kubelet[700]: I1109 14:13:03.788503     700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e756462e-7015-46c7-9a0e-ce31a4ea445b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fckr9\" (UID: \"e756462e-7015-46c7-9a0e-ce31a4ea445b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9"
	Nov 09 14:13:07 no-preload-152932 kubelet[700]: I1109 14:13:07.873382     700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gcb5c" podStartSLOduration=1.41041836 podStartE2EDuration="4.873364477s" podCreationTimestamp="2025-11-09 14:13:03 +0000 UTC" firstStartedPulling="2025-11-09 14:13:04.010480455 +0000 UTC m=+7.335358433" lastFinishedPulling="2025-11-09 14:13:07.473426587 +0000 UTC m=+10.798304550" observedRunningTime="2025-11-09 14:13:07.872563007 +0000 UTC m=+11.197440988" watchObservedRunningTime="2025-11-09 14:13:07.873364477 +0000 UTC m=+11.198242459"
	Nov 09 14:13:10 no-preload-152932 kubelet[700]: I1109 14:13:10.870021     700 scope.go:117] "RemoveContainer" containerID="6ae0c00fe9ae0913cf0df2d10cf576d71677909b5a45c4943d7f473c304aa446"
	Nov 09 14:13:11 no-preload-152932 kubelet[700]: I1109 14:13:11.874558     700 scope.go:117] "RemoveContainer" containerID="6ae0c00fe9ae0913cf0df2d10cf576d71677909b5a45c4943d7f473c304aa446"
	Nov 09 14:13:11 no-preload-152932 kubelet[700]: I1109 14:13:11.874980     700 scope.go:117] "RemoveContainer" containerID="4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9"
	Nov 09 14:13:11 no-preload-152932 kubelet[700]: E1109 14:13:11.875248     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:12 no-preload-152932 kubelet[700]: I1109 14:13:12.878428     700 scope.go:117] "RemoveContainer" containerID="4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9"
	Nov 09 14:13:12 no-preload-152932 kubelet[700]: E1109 14:13:12.878665     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:21 no-preload-152932 kubelet[700]: I1109 14:13:21.758305     700 scope.go:117] "RemoveContainer" containerID="4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9"
	Nov 09 14:13:22 no-preload-152932 kubelet[700]: I1109 14:13:22.907689     700 scope.go:117] "RemoveContainer" containerID="4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9"
	Nov 09 14:13:22 no-preload-152932 kubelet[700]: I1109 14:13:22.907897     700 scope.go:117] "RemoveContainer" containerID="921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	Nov 09 14:13:22 no-preload-152932 kubelet[700]: E1109 14:13:22.908085     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:30 no-preload-152932 kubelet[700]: I1109 14:13:30.929490     700 scope.go:117] "RemoveContainer" containerID="2972477eef10b1150da7a7877e662c3ece69fa35d977d11ddf7c1dde8d07a06e"
	Nov 09 14:13:31 no-preload-152932 kubelet[700]: I1109 14:13:31.758696     700 scope.go:117] "RemoveContainer" containerID="921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	Nov 09 14:13:31 no-preload-152932 kubelet[700]: E1109 14:13:31.758891     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:44 no-preload-152932 kubelet[700]: I1109 14:13:44.802517     700 scope.go:117] "RemoveContainer" containerID="921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	Nov 09 14:13:45 no-preload-152932 kubelet[700]: I1109 14:13:45.972134     700 scope.go:117] "RemoveContainer" containerID="921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	Nov 09 14:13:45 no-preload-152932 kubelet[700]: I1109 14:13:45.972381     700 scope.go:117] "RemoveContainer" containerID="d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c"
	Nov 09 14:13:45 no-preload-152932 kubelet[700]: E1109 14:13:45.972590     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:45 no-preload-152932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:13:45 no-preload-152932 kubelet[700]: I1109 14:13:45.989596     700 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 09 14:13:46 no-preload-152932 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:13:46 no-preload-152932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:13:46 no-preload-152932 systemd[1]: kubelet.service: Consumed 1.500s CPU time.
	
	
	==> kubernetes-dashboard [b819fcdfa94d5a72e23872275d42eb5ba78917d6af9789a2cc366b2d19075ae8] <==
	2025/11/09 14:13:07 Using namespace: kubernetes-dashboard
	2025/11/09 14:13:07 Using in-cluster config to connect to apiserver
	2025/11/09 14:13:07 Using secret token for csrf signing
	2025/11/09 14:13:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:13:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:13:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:13:07 Generating JWE encryption key
	2025/11/09 14:13:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:13:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:13:07 Initializing JWE encryption key from synchronized object
	2025/11/09 14:13:07 Creating in-cluster Sidecar client
	2025/11/09 14:13:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:13:07 Serving insecurely on HTTP port: 9090
	2025/11/09 14:13:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:13:07 Starting overwatch
	
	
	==> storage-provisioner [2972477eef10b1150da7a7877e662c3ece69fa35d977d11ddf7c1dde8d07a06e] <==
	I1109 14:13:00.152686       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:13:30.155094       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302] <==
	I1109 14:13:30.979510       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:13:30.985918       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:13:30.985962       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:13:30.987501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:34.442627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:38.703547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:42.301677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:45.356767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:48.379382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:48.383507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:13:48.383663       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:13:48.383799       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-152932_fe504f64-6255-4243-9145-bf058c39575a!
	I1109 14:13:48.383797       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"080c7d7d-4bc8-4b08-b04d-f039e8b65be0", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-152932_fe504f64-6255-4243-9145-bf058c39575a became leader
	W1109 14:13:48.385491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:48.389054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:13:48.484060       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-152932_fe504f64-6255-4243-9145-bf058c39575a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152932 -n no-preload-152932
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152932 -n no-preload-152932: exit status 2 (303.010479ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-152932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-152932
helpers_test.go:243: (dbg) docker inspect no-preload-152932:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf",
	        "Created": "2025-11-09T14:11:31.387722642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251011,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:12:49.630034696Z",
	            "FinishedAt": "2025-11-09T14:12:48.783525455Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/hosts",
	        "LogPath": "/var/lib/docker/containers/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf/026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf-json.log",
	        "Name": "/no-preload-152932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-152932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-152932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "026fe7b1acd104cf5ea2e68ff4b243fe083d20e742b514efda1118e468c96cbf",
	                "LowerDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77d2d24ba23adce1f064070530181e0a04c4153d6213564ce1e1eac896b7ce51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-152932",
	                "Source": "/var/lib/docker/volumes/no-preload-152932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-152932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-152932",
	                "name.minikube.sigs.k8s.io": "no-preload-152932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab72d7fa1cfa89b38076ff1d8ee57da3d4a0df9ad5b1886eba9660e375d180b1",
	            "SandboxKey": "/var/run/docker/netns/ab72d7fa1cfa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-152932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:36:28:42:bb:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c509180f30963f7e773167a4898cba178d323dd41609baf99fe1db9a86f38a9",
	                    "EndpointID": "bc74bf022a367a0211de409dff5f1a0d0e0ef80fea8f234fa5868d117501a22a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-152932",
	                        "026fe7b1acd1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152932 -n no-preload-152932
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152932 -n no-preload-152932: exit status 2 (300.824492ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-152932 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-883873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-152932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-169816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ stop    │ -p no-preload-152932 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ delete  │ -p cert-expiration-883873                                                                                                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-152932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p kubernetes-upgrade-755159                                                                                                                                                                                                                  │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ stop    │ -p embed-certs-273180 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-169816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ image   │ no-preload-152932 image list --format=json                                                                                                                                                                                                    │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p no-preload-152932 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:13:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:13:47.191040  264151 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:13:47.191290  264151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:47.191303  264151 out.go:374] Setting ErrFile to fd 2...
	I1109 14:13:47.191309  264151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:47.191631  264151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:13:47.192252  264151 out.go:368] Setting JSON to false
	I1109 14:13:47.193989  264151 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3377,"bootTime":1762694250,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:13:47.194106  264151 start.go:143] virtualization: kvm guest
	I1109 14:13:47.195682  264151 out.go:179] * [embed-certs-273180] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:13:47.196934  264151 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:13:47.196949  264151 notify.go:221] Checking for updates...
	I1109 14:13:47.199813  264151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:13:47.201419  264151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:13:47.204207  264151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:13:47.205355  264151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:13:47.206486  264151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:13:47.208111  264151 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:47.208796  264151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:13:47.236955  264151 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:13:47.237074  264151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:47.305532  264151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-09 14:13:47.292768478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:47.305712  264151 docker.go:319] overlay module found
	I1109 14:13:47.307334  264151 out.go:179] * Using the docker driver based on existing profile
	I1109 14:13:47.309245  264151 start.go:309] selected driver: docker
	I1109 14:13:47.309264  264151 start.go:930] validating driver "docker" against &{Name:embed-certs-273180 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-273180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:47.309367  264151 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:13:47.310124  264151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:47.380373  264151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-09 14:13:47.368952216 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:47.380744  264151 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:47.380779  264151 cni.go:84] Creating CNI manager for ""
	I1109 14:13:47.380836  264151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:47.380876  264151 start.go:353] cluster config:
	{Name:embed-certs-273180 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-273180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:47.386919  264151 out.go:179] * Starting "embed-certs-273180" primary control-plane node in "embed-certs-273180" cluster
	I1109 14:13:47.388053  264151 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:13:47.389139  264151 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:13:47.390116  264151 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:47.390140  264151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:13:47.390158  264151 cache.go:65] Caching tarball of preloaded images
	I1109 14:13:47.390205  264151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:13:47.390261  264151 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:13:47.390278  264151 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:13:47.390371  264151 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/config.json ...
	I1109 14:13:47.416451  264151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:13:47.416474  264151 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:13:47.416492  264151 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:13:47.416517  264151 start.go:360] acquireMachinesLock for embed-certs-273180: {Name:mk5ccfef789f60e3d67a3edba8bce23983e1d48c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:13:47.416579  264151 start.go:364] duration metric: took 37.88µs to acquireMachinesLock for "embed-certs-273180"
	I1109 14:13:47.416607  264151 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:13:47.416617  264151 fix.go:54] fixHost starting: 
	I1109 14:13:47.416900  264151 cli_runner.go:164] Run: docker container inspect embed-certs-273180 --format={{.State.Status}}
	I1109 14:13:47.436052  264151 fix.go:112] recreateIfNeeded on embed-certs-273180: state=Stopped err=<nil>
	W1109 14:13:47.436078  264151 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 09 14:13:22 no-preload-152932 crio[561]: time="2025-11-09T14:13:22.177585794Z" level=info msg="Started container" PID=1733 containerID=921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper id=0861e8d4-4644-4c4a-9362-a117aeffc0b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d02302c78c3e4e26c3b69c1102d9daeb0d5704650f8ed0963582f46dcf9c256
	Nov 09 14:13:22 no-preload-152932 crio[561]: time="2025-11-09T14:13:22.909001881Z" level=info msg="Removing container: 4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9" id=0185a56d-6cc8-4cff-8f3f-37624d122f6d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:13:22 no-preload-152932 crio[561]: time="2025-11-09T14:13:22.919942325Z" level=info msg="Removed container 4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper" id=0185a56d-6cc8-4cff-8f3f-37624d122f6d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.929916503Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eb1da186-a028-43e2-ac68-74fd573a8a20 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.930856088Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9ed0954f-057d-41e8-997a-a4ad5b20bebd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.931788162Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ea9fd231-df6b-43af-9c81-2e735b203436 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.932016001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.936192049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.936359435Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e91ddbbb2d78c621ae7c7d2524dd217fab063ef7fc82d23ddf504b1d978f638c/merged/etc/passwd: no such file or directory"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.93639029Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e91ddbbb2d78c621ae7c7d2524dd217fab063ef7fc82d23ddf504b1d978f638c/merged/etc/group: no such file or directory"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.936678642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.965456729Z" level=info msg="Created container 3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302: kube-system/storage-provisioner/storage-provisioner" id=ea9fd231-df6b-43af-9c81-2e735b203436 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.96590955Z" level=info msg="Starting container: 3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302" id=d8f6f0c0-fbc4-4a5e-98f9-27ee342e4796 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:30 no-preload-152932 crio[561]: time="2025-11-09T14:13:30.967743834Z" level=info msg="Started container" PID=1747 containerID=3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302 description=kube-system/storage-provisioner/storage-provisioner id=d8f6f0c0-fbc4-4a5e-98f9-27ee342e4796 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8931f9acc7368bc426d612b499dbcaf52bf5585f0427ac283e594f380439e596
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.802966802Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=91de5d86-d395-4577-a534-aedc3ea58db2 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.851673643Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=65810300-6927-4605-9ce2-295037a525dd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.852770247Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper" id=5c8acba4-3c4c-4f50-b365-5b3a6eeabfee name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.852908641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.920814657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:44 no-preload-152932 crio[561]: time="2025-11-09T14:13:44.921252285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:13:45 no-preload-152932 crio[561]: time="2025-11-09T14:13:45.025395914Z" level=info msg="Created container d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper" id=5c8acba4-3c4c-4f50-b365-5b3a6eeabfee name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:13:45 no-preload-152932 crio[561]: time="2025-11-09T14:13:45.026023001Z" level=info msg="Starting container: d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c" id=43f7ae77-70d9-4b06-82f2-51ee6b41077a name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:13:45 no-preload-152932 crio[561]: time="2025-11-09T14:13:45.02790971Z" level=info msg="Started container" PID=1782 containerID=d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper id=43f7ae77-70d9-4b06-82f2-51ee6b41077a name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d02302c78c3e4e26c3b69c1102d9daeb0d5704650f8ed0963582f46dcf9c256
	Nov 09 14:13:45 no-preload-152932 crio[561]: time="2025-11-09T14:13:45.978306009Z" level=info msg="Removing container: 921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd" id=7c835763-10b2-433d-92e4-bea4c1c4cae3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:13:46 no-preload-152932 crio[561]: time="2025-11-09T14:13:46.070348798Z" level=info msg="Removed container 921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9/dashboard-metrics-scraper" id=7c835763-10b2-433d-92e4-bea4c1c4cae3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d37c6d69927ae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   1d02302c78c3e       dashboard-metrics-scraper-6ffb444bf9-fckr9   kubernetes-dashboard
	3cc885138bbff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   8931f9acc7368       storage-provisioner                          kube-system
	b819fcdfa94d5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   613fa32a6bb13       kubernetes-dashboard-855c9754f9-gcb5c        kubernetes-dashboard
	c75c2ccc3b87a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   0a9635253e9c7       busybox                                      default
	715a12e55ec25       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   e9f1d73202ec2       coredns-66bc5c9577-6ssc5                     kube-system
	d406a9ea73e73       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   ad14b56b22210       kube-proxy-f5tgg                             kube-system
	0d9225a0084fd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   106dd93444a04       kindnet-qk599                                kube-system
	2972477eef10b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   8931f9acc7368       storage-provisioner                          kube-system
	720328094beb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   71d02751c2c2a       kube-scheduler-no-preload-152932             kube-system
	6e5e44f605ea5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   3745afc61c596       kube-apiserver-no-preload-152932             kube-system
	3f1aabc8689fc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   dc90325ddd482       etcd-no-preload-152932                       kube-system
	a524a17510d58       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   673ee3c2d3122       kube-controller-manager-no-preload-152932    kube-system
	
	
	==> coredns [715a12e55ec25ed041af2464c61ec016d0a834599330b4a3c827207d39b46446] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33858 - 64248 "HINFO IN 1859816579354728020.8411571753029712130. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.097275714s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-152932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-152932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=no-preload-152932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_11_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-152932
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:13:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:13:30 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:13:30 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:13:30 +0000   Sun, 09 Nov 2025 14:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:13:30 +0000   Sun, 09 Nov 2025 14:12:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-152932
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                32c8871f-6491-4e30-8669-8cd62f18ad7c
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-6ssc5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-no-preload-152932                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-qk599                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-152932              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-152932     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-f5tgg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-152932              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fckr9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gcb5c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node no-preload-152932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node no-preload-152932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node no-preload-152932 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node no-preload-152932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node no-preload-152932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node no-preload-152932 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node no-preload-152932 event: Registered Node no-preload-152932 in Controller
	  Normal  NodeReady                93s                  kubelet          Node no-preload-152932 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node no-preload-152932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node no-preload-152932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node no-preload-152932 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                  node-controller  Node no-preload-152932 event: Registered Node no-preload-152932 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [3f1aabc8689fc625730c21fb73ae4ca9894c05548c4cc4324892ed483033febf] <==
	{"level":"warn","ts":"2025-11-09T14:12:59.054065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.061819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.069626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.076956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.083999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.092120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.101455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.109295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.124143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.134514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.151910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.156030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.163726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.171766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:12:59.229278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48716","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:13:21.020813Z","caller":"traceutil/trace.go:172","msg":"trace[1914487713] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"131.5651ms","start":"2025-11-09T14:13:20.889227Z","end":"2025-11-09T14:13:21.020792Z","steps":["trace[1914487713] 'process raft request'  (duration: 130.704577ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:21.281906Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.681647ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T14:13:21.281986Z","caller":"traceutil/trace.go:172","msg":"trace[962648928] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:642; }","duration":"121.783385ms","start":"2025-11-09T14:13:21.160189Z","end":"2025-11-09T14:13:21.281973Z","steps":["trace[962648928] 'range keys from in-memory index tree'  (duration: 121.637136ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:21.873292Z","caller":"traceutil/trace.go:172","msg":"trace[2101697604] linearizableReadLoop","detail":"{readStateIndex:677; appliedIndex:677; }","duration":"106.195296ms","start":"2025-11-09T14:13:21.767062Z","end":"2025-11-09T14:13:21.873257Z","steps":["trace[2101697604] 'read index received'  (duration: 106.185604ms)","trace[2101697604] 'applied index is now lower than readState.Index'  (duration: 8.079µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:21.873435Z","caller":"traceutil/trace.go:172","msg":"trace[1493347500] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"109.344ms","start":"2025-11-09T14:13:21.764077Z","end":"2025-11-09T14:13:21.873421Z","steps":["trace[1493347500] 'process raft request'  (duration: 109.21024ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:21.873523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.43926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-6ssc5\" limit:1 ","response":"range_response_count:1 size:5935"}
	{"level":"info","ts":"2025-11-09T14:13:21.873571Z","caller":"traceutil/trace.go:172","msg":"trace[360841311] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-6ssc5; range_end:; response_count:1; response_revision:643; }","duration":"106.507911ms","start":"2025-11-09T14:13:21.767053Z","end":"2025-11-09T14:13:21.873561Z","steps":["trace[360841311] 'agreement among raft nodes before linearized reading'  (duration: 106.316185ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:44.919143Z","caller":"traceutil/trace.go:172","msg":"trace[1055551852] transaction","detail":"{read_only:false; response_revision:668; number_of_response:1; }","duration":"112.187288ms","start":"2025-11-09T14:13:44.806936Z","end":"2025-11-09T14:13:44.919123Z","steps":["trace[1055551852] 'process raft request'  (duration: 112.056045ms)"],"step_count":1}
	2025/11/09 14:13:45 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2025/11/09 14:13:45 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 14:13:50 up 56 min,  0 user,  load average: 4.28, 3.14, 1.97
	Linux no-preload-152932 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0d9225a0084fd2c65edc6252bc7dcfd828517bdfd189beecc698239ca133276e] <==
	I1109 14:13:00.294923       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:13:00.295176       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1109 14:13:00.295292       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:13:00.295305       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:13:00.295324       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:13:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:13:00.587675       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:13:00.587757       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:13:00.587768       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:13:00.588551       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:13:00.887882       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:13:00.887931       1 metrics.go:72] Registering metrics
	I1109 14:13:00.887991       1 controller.go:711] "Syncing nftables rules"
	I1109 14:13:10.588142       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:13:10.588216       1 main.go:301] handling current node
	I1109 14:13:20.587833       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:13:20.587863       1 main.go:301] handling current node
	I1109 14:13:30.588193       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:13:30.588220       1 main.go:301] handling current node
	I1109 14:13:40.591129       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1109 14:13:40.591168       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e5e44f605ea5dfc9e66bf4f541d6f330c095b954f694bbe92385ce144a28bf1] <==
	I1109 14:12:59.764527       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:12:59.862779       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:12:59.990008       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:13:00.017765       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:13:00.033622       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:13:00.039216       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:13:00.074151       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.26.66"}
	I1109 14:13:00.085049       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.48.231"}
	I1109 14:13:00.613276       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:13:03.404193       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:13:03.404246       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:13:03.553756       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:13:03.606755       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	{"level":"warn","ts":"2025-11-09T14:13:45.996998Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012814a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	{"level":"warn","ts":"2025-11-09T14:13:45.996953Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f1d860/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1109 14:13:45.997086       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.997110       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.353µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1109 14:13:45.997117       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.997115       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 36.366µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1109 14:13:45.997086       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.998233       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.998261       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.998267       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1109 14:13:45.999421       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.420386ms" method="PATCH" path="/api/v1/namespaces/kubernetes-dashboard/events/dashboard-metrics-scraper-6ffb444bf9-fckr9.18765c1ee2283d93" result=null
	E1109 14:13:45.999464       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.549748ms" method="PATCH" path="/api/v1/namespaces/kubernetes-dashboard/pods/dashboard-metrics-scraper-6ffb444bf9-fckr9/status" result=null
	
	
	==> kube-controller-manager [a524a17510d58e32d7372cb39f92a6de4d0886452efe91e57c3b96f7dfcef202] <==
	I1109 14:13:03.017323       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:13:03.037487       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:13:03.048903       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:13:03.048919       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:13:03.048938       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:13:03.048946       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 14:13:03.049256       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:13:03.049263       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:13:03.049377       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:13:03.049450       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:13:03.049511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:13:03.049885       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:13:03.049938       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:13:03.050140       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:13:03.050340       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:13:03.050394       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:13:03.051234       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:13:03.053854       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:13:03.054678       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:13:03.056717       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:13:03.058935       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:13:03.062224       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:13:03.068518       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:13:03.082727       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:13:03.089118       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d406a9ea73e73d73cdb87a434fe1a06c0466bce9b5cb85caaac981129659bac7] <==
	I1109 14:13:00.193124       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:13:00.258869       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:13:00.359779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:13:00.359823       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1109 14:13:00.359952       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:13:00.378542       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:13:00.378584       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:13:00.383491       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:13:00.383904       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:13:00.383937       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:13:00.385400       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:13:00.385430       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:13:00.385480       1 config.go:309] "Starting node config controller"
	I1109 14:13:00.385656       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:13:00.385865       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:13:00.385879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:13:00.386364       1 config.go:200] "Starting service config controller"
	I1109 14:13:00.386380       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:13:00.486407       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:13:00.486425       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:13:00.486456       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:13:00.486517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [720328094beb2d632eb2d2a85e9b29dcb52b6079d330d9dd8a4433bc8fb804e2] <==
	I1109 14:12:58.554921       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:12:59.641159       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:12:59.641203       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:12:59.641215       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:12:59.641224       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:12:59.673442       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:12:59.673473       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:12:59.676762       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:12:59.676796       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:12:59.677159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:12:59.677229       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:12:59.777020       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:13:03 no-preload-152932 kubelet[700]: I1109 14:13:03.788427     700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b63bef34-a8fe-46c6-b524-40d9292214e9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-gcb5c\" (UID: \"b63bef34-a8fe-46c6-b524-40d9292214e9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gcb5c"
	Nov 09 14:13:03 no-preload-152932 kubelet[700]: I1109 14:13:03.788503     700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e756462e-7015-46c7-9a0e-ce31a4ea445b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fckr9\" (UID: \"e756462e-7015-46c7-9a0e-ce31a4ea445b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9"
	Nov 09 14:13:07 no-preload-152932 kubelet[700]: I1109 14:13:07.873382     700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gcb5c" podStartSLOduration=1.41041836 podStartE2EDuration="4.873364477s" podCreationTimestamp="2025-11-09 14:13:03 +0000 UTC" firstStartedPulling="2025-11-09 14:13:04.010480455 +0000 UTC m=+7.335358433" lastFinishedPulling="2025-11-09 14:13:07.473426587 +0000 UTC m=+10.798304550" observedRunningTime="2025-11-09 14:13:07.872563007 +0000 UTC m=+11.197440988" watchObservedRunningTime="2025-11-09 14:13:07.873364477 +0000 UTC m=+11.198242459"
	Nov 09 14:13:10 no-preload-152932 kubelet[700]: I1109 14:13:10.870021     700 scope.go:117] "RemoveContainer" containerID="6ae0c00fe9ae0913cf0df2d10cf576d71677909b5a45c4943d7f473c304aa446"
	Nov 09 14:13:11 no-preload-152932 kubelet[700]: I1109 14:13:11.874558     700 scope.go:117] "RemoveContainer" containerID="6ae0c00fe9ae0913cf0df2d10cf576d71677909b5a45c4943d7f473c304aa446"
	Nov 09 14:13:11 no-preload-152932 kubelet[700]: I1109 14:13:11.874980     700 scope.go:117] "RemoveContainer" containerID="4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9"
	Nov 09 14:13:11 no-preload-152932 kubelet[700]: E1109 14:13:11.875248     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:12 no-preload-152932 kubelet[700]: I1109 14:13:12.878428     700 scope.go:117] "RemoveContainer" containerID="4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9"
	Nov 09 14:13:12 no-preload-152932 kubelet[700]: E1109 14:13:12.878665     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:21 no-preload-152932 kubelet[700]: I1109 14:13:21.758305     700 scope.go:117] "RemoveContainer" containerID="4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9"
	Nov 09 14:13:22 no-preload-152932 kubelet[700]: I1109 14:13:22.907689     700 scope.go:117] "RemoveContainer" containerID="4daed42f566f829ab80d72226ac2da0a3d6e1dd90196a17fa02ca2b8de5502f9"
	Nov 09 14:13:22 no-preload-152932 kubelet[700]: I1109 14:13:22.907897     700 scope.go:117] "RemoveContainer" containerID="921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	Nov 09 14:13:22 no-preload-152932 kubelet[700]: E1109 14:13:22.908085     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:30 no-preload-152932 kubelet[700]: I1109 14:13:30.929490     700 scope.go:117] "RemoveContainer" containerID="2972477eef10b1150da7a7877e662c3ece69fa35d977d11ddf7c1dde8d07a06e"
	Nov 09 14:13:31 no-preload-152932 kubelet[700]: I1109 14:13:31.758696     700 scope.go:117] "RemoveContainer" containerID="921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	Nov 09 14:13:31 no-preload-152932 kubelet[700]: E1109 14:13:31.758891     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:44 no-preload-152932 kubelet[700]: I1109 14:13:44.802517     700 scope.go:117] "RemoveContainer" containerID="921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	Nov 09 14:13:45 no-preload-152932 kubelet[700]: I1109 14:13:45.972134     700 scope.go:117] "RemoveContainer" containerID="921c56940e256ab0d1b5481518906ba9a29600f43fab3593a3d2209f18ca64cd"
	Nov 09 14:13:45 no-preload-152932 kubelet[700]: I1109 14:13:45.972381     700 scope.go:117] "RemoveContainer" containerID="d37c6d69927ae26902e570db5834756d1e7a327adf798fd97003893a26aa913c"
	Nov 09 14:13:45 no-preload-152932 kubelet[700]: E1109 14:13:45.972590     700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fckr9_kubernetes-dashboard(e756462e-7015-46c7-9a0e-ce31a4ea445b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fckr9" podUID="e756462e-7015-46c7-9a0e-ce31a4ea445b"
	Nov 09 14:13:45 no-preload-152932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:13:45 no-preload-152932 kubelet[700]: I1109 14:13:45.989596     700 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 09 14:13:46 no-preload-152932 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:13:46 no-preload-152932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:13:46 no-preload-152932 systemd[1]: kubelet.service: Consumed 1.500s CPU time.
	
	
	==> kubernetes-dashboard [b819fcdfa94d5a72e23872275d42eb5ba78917d6af9789a2cc366b2d19075ae8] <==
	2025/11/09 14:13:07 Using namespace: kubernetes-dashboard
	2025/11/09 14:13:07 Using in-cluster config to connect to apiserver
	2025/11/09 14:13:07 Using secret token for csrf signing
	2025/11/09 14:13:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:13:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:13:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:13:07 Generating JWE encryption key
	2025/11/09 14:13:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:13:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:13:07 Initializing JWE encryption key from synchronized object
	2025/11/09 14:13:07 Creating in-cluster Sidecar client
	2025/11/09 14:13:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:13:07 Serving insecurely on HTTP port: 9090
	2025/11/09 14:13:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:13:07 Starting overwatch
	
	
	==> storage-provisioner [2972477eef10b1150da7a7877e662c3ece69fa35d977d11ddf7c1dde8d07a06e] <==
	I1109 14:13:00.152686       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:13:30.155094       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3cc885138bbff8c06fe86888dc349ca498bcd78c2f1689ce8515d50fe2d3d302] <==
	I1109 14:13:30.979510       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:13:30.985918       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:13:30.985962       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:13:30.987501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:34.442627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:38.703547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:42.301677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:45.356767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:48.379382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:48.383507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:13:48.383663       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:13:48.383799       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-152932_fe504f64-6255-4243-9145-bf058c39575a!
	I1109 14:13:48.383797       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"080c7d7d-4bc8-4b08-b04d-f039e8b65be0", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-152932_fe504f64-6255-4243-9145-bf058c39575a became leader
	W1109 14:13:48.385491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:48.389054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:13:48.484060       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-152932_fe504f64-6255-4243-9145-bf058c39575a!
	W1109 14:13:50.392232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:13:50.398574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152932 -n no-preload-152932
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152932 -n no-preload-152932: exit status 2 (330.225241ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-152932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.40s)
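The kubelet journal above shows dashboard-metrics-scraper-6ffb444bf9-fckr9 stuck in CrashLoopBackOff, with the back-off window growing from 10s to 20s to 40s until kubelet was stopped. As a sketch of the usual follow-up (pod and namespace names are taken from this run; the profile was deleted shortly afterwards, so this is not output captured here), the previous container's logs and the pod events could be pulled with:

	kubectl --context no-preload-152932 -n kubernetes-dashboard logs deploy/dashboard-metrics-scraper --previous
	kubectl --context no-preload-152932 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-fckr9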

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (240.352949ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
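The addon enable aborted in minikube's paused-state check: it runs "sudo runc list -f json" inside the node, and that failed because /run/runc did not exist yet (the newest-cni-331530 container had been created less than half a minute earlier, per the docker inspect below). A minimal way to confirm the symptom by hand, assuming the profile is still running, is to repeat the same listing over minikube ssh:

	out/minikube-linux-amd64 -p newest-cni-331530 ssh "sudo ls /run/runc"
	out/minikube-linux-amd64 -p newest-cni-331530 ssh "sudo runc list -f json"

If the directory is still missing, the same "open /run/runc: no such file or directory" error shown in the stderr above is expected; the listing normally starts succeeding once the runtime has created containers under that state root.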
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-331530
helpers_test.go:243: (dbg) docker inspect newest-cni-331530:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa",
	        "Created": "2025-11-09T14:13:46.9311742Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 264048,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:13:46.973373905Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/hosts",
	        "LogPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa-json.log",
	        "Name": "/newest-cni-331530",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-331530:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-331530",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa",
	                "LowerDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/merged",
	                "UpperDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/diff",
	                "WorkDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-331530",
	                "Source": "/var/lib/docker/volumes/newest-cni-331530/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-331530",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-331530",
	                "name.minikube.sigs.k8s.io": "newest-cni-331530",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "35755e174bf47435029d312095bff1c24d715f060ead69e95ee24f3a33d21ebf",
	            "SandboxKey": "/var/run/docker/netns/35755e174bf4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-331530": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:eb:2e:66:ef:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "48111e278cbe43aa4a69b8079dbb61289459a16d778ee4d9d738546cd26897c8",
	                    "EndpointID": "394adc089cc986e16474fefd0d83f15b76bc8499f368526e243696f5000c3c85",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-331530",
	                        "b0c3dbe7b9b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331530 -n newest-cni-331530
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-331530 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-152932 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ delete  │ -p cert-expiration-883873                                                                                                                                                                                                                     │ cert-expiration-883873       │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-152932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:12 UTC │
	│ start   │ -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:12 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ start   │ -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p kubernetes-upgrade-755159                                                                                                                                                                                                                  │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ stop    │ -p embed-certs-273180 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-169816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ no-preload-152932 image list --format=json                                                                                                                                                                                                    │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p no-preload-152932 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p auto-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:13:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:13:53.956085  268505 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:13:53.956192  268505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:53.956201  268505 out.go:374] Setting ErrFile to fd 2...
	I1109 14:13:53.956205  268505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:13:53.956395  268505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:13:53.956910  268505 out.go:368] Setting JSON to false
	I1109 14:13:53.958180  268505 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3384,"bootTime":1762694250,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:13:53.958260  268505 start.go:143] virtualization: kvm guest
	I1109 14:13:53.960113  268505 out.go:179] * [auto-593530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:13:53.961332  268505 notify.go:221] Checking for updates...
	I1109 14:13:53.961348  268505 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:13:53.962469  268505 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:13:53.963603  268505 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:13:53.964757  268505 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:13:53.965989  268505 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:13:53.967390  268505 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:13:53.969134  268505 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:53.969274  268505 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:53.969400  268505 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:53.969504  268505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:13:53.996293  268505 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:13:53.996431  268505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:54.057747  268505 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-09 14:13:54.046548862 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:54.057890  268505 docker.go:319] overlay module found
	I1109 14:13:54.059689  268505 out.go:179] * Using the docker driver based on user configuration
	I1109 14:13:54.060750  268505 start.go:309] selected driver: docker
	I1109 14:13:54.060763  268505 start.go:930] validating driver "docker" against <nil>
	I1109 14:13:54.060776  268505 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:13:54.061536  268505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:13:54.119676  268505 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-09 14:13:54.109694631 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:13:54.119923  268505 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:13:54.120216  268505 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:54.125057  268505 out.go:179] * Using Docker driver with root privileges
	I1109 14:13:54.126741  268505 cni.go:84] Creating CNI manager for ""
	I1109 14:13:54.126821  268505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:54.126834  268505 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:13:54.126948  268505 start.go:353] cluster config:
	{Name:auto-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1109 14:13:54.128327  268505 out.go:179] * Starting "auto-593530" primary control-plane node in "auto-593530" cluster
	I1109 14:13:54.129405  268505 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:13:54.130538  268505 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:13:54.131524  268505 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:54.131558  268505 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:13:54.131569  268505 cache.go:65] Caching tarball of preloaded images
	I1109 14:13:54.131613  268505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:13:54.131693  268505 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:13:54.131715  268505 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:13:54.131841  268505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/config.json ...
	I1109 14:13:54.131869  268505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/config.json: {Name:mk8689229eca143b949fc2ab4268665672edfedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:54.153537  268505 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:13:54.153555  268505 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:13:54.153572  268505 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:13:54.153602  268505 start.go:360] acquireMachinesLock for auto-593530: {Name:mk17d4bffbeed9def9c153e0fa16e3ef83a089e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:13:54.153706  268505 start.go:364] duration metric: took 87.427µs to acquireMachinesLock for "auto-593530"
	I1109 14:13:54.153733  268505 start.go:93] Provisioning new machine with config: &{Name:auto-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-593530 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:13:54.153816  268505 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:13:53.293824  264151 cli_runner.go:164] Run: docker network inspect embed-certs-273180 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:13:53.311255  264151 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1109 14:13:53.315499  264151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:13:53.326983  264151 kubeadm.go:884] updating cluster {Name:embed-certs-273180 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-273180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:13:53.327086  264151 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:53.327139  264151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:13:53.362806  264151 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:13:53.362828  264151 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:13:53.362878  264151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:13:53.389818  264151 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:13:53.389840  264151 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:13:53.389848  264151 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1109 14:13:53.389962  264151 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-273180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-273180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:13:53.390034  264151 ssh_runner.go:195] Run: crio config
	I1109 14:13:53.446560  264151 cni.go:84] Creating CNI manager for ""
	I1109 14:13:53.446581  264151 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:53.446594  264151 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:13:53.446621  264151 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-273180 NodeName:embed-certs-273180 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:13:53.446804  264151 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-273180"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:13:53.446868  264151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:13:53.455583  264151 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:13:53.455661  264151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:13:53.463718  264151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1109 14:13:53.494070  264151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:13:53.509257  264151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
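The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is generated from the options struct logged at kubeadm.go:190 and copied to /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs. As a rough sketch of the technique only (not minikube's actual template or types), rendering such a fragment from a Go struct with text/template looks like this:

package main

import (
	"os"
	"text/template"
)

// Hypothetical, trimmed-down option set; the real generator carries far more fields.
type kubeadmOpts struct {
	APIServerPort int
	PodSubnet     string
	ServiceCIDR   string
	K8sVersion    string
}

// A fragment of the ClusterConfiguration document shown in the log above.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		APIServerPort: 8443,
		PodSubnet:     "10.244.0.0/16",
		ServiceCIDR:   "10.96.0.0/12",
		K8sVersion:    "v1.34.1",
	}
	// Render to stdout; the real flow writes the full multi-document YAML
	// to the remote path over SSH instead.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}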
	I1109 14:13:53.522389  264151 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:13:53.525981  264151 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:13:53.535509  264151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:13:53.624566  264151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:13:53.649308  264151 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180 for IP: 192.168.94.2
	I1109 14:13:53.649328  264151 certs.go:195] generating shared ca certs ...
	I1109 14:13:53.649348  264151 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:53.649497  264151 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:13:53.649566  264151 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:13:53.649581  264151 certs.go:257] generating profile certs ...
	I1109 14:13:53.649729  264151 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/client.key
	I1109 14:13:53.649807  264151 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/apiserver.key.b47b3945
	I1109 14:13:53.649862  264151 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/proxy-client.key
	I1109 14:13:53.650005  264151 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:13:53.650049  264151 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:13:53.650064  264151 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:13:53.650102  264151 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:13:53.650146  264151 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:13:53.650179  264151 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:13:53.650236  264151 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:13:53.651115  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:13:53.671278  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:13:53.691315  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:13:53.714932  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:13:53.740414  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1109 14:13:53.767245  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:13:53.785326  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:13:53.805208  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/embed-certs-273180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:13:53.823234  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:13:53.842088  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:13:53.861094  264151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:13:53.880226  264151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:13:53.892248  264151 ssh_runner.go:195] Run: openssl version
	I1109 14:13:53.898504  264151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:13:53.906521  264151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:13:53.910127  264151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:13:53.910185  264151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:13:53.948725  264151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:13:53.956977  264151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:13:53.965729  264151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:13:53.969545  264151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:13:53.969582  264151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:13:54.011386  264151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:13:54.022699  264151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:13:54.032718  264151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:54.037243  264151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:54.037368  264151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:54.079707  264151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
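The ls / openssl x509 -hash / ln -fs sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients on the node find it via CApath lookup. A minimal sketch of the same idea, shelling out to the openssl binary just as the log does (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certPath into certsDir under OpenSSL's subject-hash
// name (<hash>.0), mirroring the `openssl x509 -hash` + `ln -fs` pair above.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}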
	I1109 14:13:54.091036  264151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:13:54.095654  264151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:13:54.136799  264151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:13:54.180932  264151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:13:54.229236  264151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:13:54.281260  264151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:13:54.337596  264151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
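Each openssl x509 -noout -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours; since all six checks pass, the existing control-plane certificates are reused rather than regenerated. An equivalent check with Go's crypto/x509, shown as an illustration only:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same check as `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate")
	}
}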
	I1109 14:13:54.396602  264151 kubeadm.go:401] StartCluster: {Name:embed-certs-273180 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-273180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:54.396880  264151 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:13:54.396986  264151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:13:54.436136  264151 cri.go:89] found id: "4f9f38d1a0f6c3b90459b53a1a0308e519ef2d1e4f12c24e072989aa297eab6c"
	I1109 14:13:54.436198  264151 cri.go:89] found id: "97f5a7b8e8b2ec193df908b13853b3f0d95619f6cc39fc3c693bf5f008f98071"
	I1109 14:13:54.436205  264151 cri.go:89] found id: "976a1e86747e59d5a7c8cdbc6eaef9d6d0fde3a08e20706cee6160921ddf6689"
	I1109 14:13:54.436210  264151 cri.go:89] found id: "9736e800f3ad26c7d4d7a6c889abcad2a30ef0f3907128567e28dbcdd9a9355e"
	I1109 14:13:54.436221  264151 cri.go:89] found id: ""
	I1109 14:13:54.436264  264151 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:13:54.451064  264151 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:13:54Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:13:54.451129  264151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:13:54.461402  264151 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:13:54.461423  264151 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:13:54.461482  264151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:13:54.472822  264151 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:13:54.473479  264151 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-273180" does not appear in /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:13:54.473841  264151 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-5854/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-273180" cluster setting kubeconfig missing "embed-certs-273180" context setting]
	I1109 14:13:54.474407  264151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:54.476046  264151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:13:54.485739  264151 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1109 14:13:54.485786  264151 kubeadm.go:602] duration metric: took 24.340885ms to restartPrimaryControlPlane
	I1109 14:13:54.485797  264151 kubeadm.go:403] duration metric: took 89.204814ms to StartCluster
	I1109 14:13:54.485814  264151 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:54.485867  264151 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:13:54.487807  264151 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:54.488312  264151 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:13:54.488402  264151 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:13:54.488790  264151 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-273180"
	I1109 14:13:54.488811  264151 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-273180"
	I1109 14:13:54.488835  264151 addons.go:70] Setting dashboard=true in profile "embed-certs-273180"
	I1109 14:13:54.489100  264151 addons.go:70] Setting default-storageclass=true in profile "embed-certs-273180"
	I1109 14:13:54.489122  264151 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-273180"
	I1109 14:13:54.489527  264151 addons.go:239] Setting addon dashboard=true in "embed-certs-273180"
	W1109 14:13:54.489539  264151 addons.go:248] addon dashboard should already be in state true
	I1109 14:13:54.489564  264151 host.go:66] Checking if "embed-certs-273180" exists ...
	I1109 14:13:54.489954  264151 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:13:54.490033  264151 cli_runner.go:164] Run: docker container inspect embed-certs-273180 --format={{.State.Status}}
	W1109 14:13:54.488819  264151 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:13:54.490117  264151 host.go:66] Checking if "embed-certs-273180" exists ...
	I1109 14:13:54.490264  264151 cli_runner.go:164] Run: docker container inspect embed-certs-273180 --format={{.State.Status}}
	I1109 14:13:54.490560  264151 cli_runner.go:164] Run: docker container inspect embed-certs-273180 --format={{.State.Status}}
	I1109 14:13:54.495352  264151 out.go:179] * Verifying Kubernetes components...
	I1109 14:13:54.496760  264151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:13:54.528403  264151 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:13:54.529438  264151 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:13:54.529673  264151 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:13:54.529705  264151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:13:54.529801  264151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-273180
	I1109 14:13:54.531895  264151 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:13:53.383123  262672 kubeadm.go:884] updating cluster {Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:13:53.383262  262672 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:53.383325  262672 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:13:53.420402  262672 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:13:53.420425  262672 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:13:53.420479  262672 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:13:53.448422  262672 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:13:53.448442  262672 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:13:53.448452  262672 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:13:53.448543  262672 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-331530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:13:53.448622  262672 ssh_runner.go:195] Run: crio config
	I1109 14:13:53.496951  262672 cni.go:84] Creating CNI manager for ""
	I1109 14:13:53.496971  262672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:13:53.496990  262672 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1109 14:13:53.497019  262672 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-331530 NodeName:newest-cni-331530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:13:53.497170  262672 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-331530"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:13:53.497240  262672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:13:53.506361  262672 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:13:53.506424  262672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:13:53.514971  262672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:13:53.527462  262672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:13:53.543211  262672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1109 14:13:53.555207  262672 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:13:53.559039  262672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:13:53.572169  262672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:13:53.658477  262672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:13:53.679389  262672 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530 for IP: 192.168.76.2
	I1109 14:13:53.679409  262672 certs.go:195] generating shared ca certs ...
	I1109 14:13:53.679425  262672 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:53.679536  262672 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:13:53.679583  262672 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:13:53.679592  262672 certs.go:257] generating profile certs ...
	I1109 14:13:53.679650  262672 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.key
	I1109 14:13:53.679667  262672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.crt with IP's: []
	I1109 14:13:54.364261  262672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.crt ...
	I1109 14:13:54.364296  262672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.crt: {Name:mkcaef1ac950180b67d47c6fc84f2b1db311c0eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:54.364470  262672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.key ...
	I1109 14:13:54.364487  262672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.key: {Name:mkee2cb6bdd4d58beeb8b25f2bb5b92e078eb06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:54.364635  262672 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key.5fb0b4cb
	I1109 14:13:54.364668  262672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt.5fb0b4cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1109 14:13:54.802246  262672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt.5fb0b4cb ...
	I1109 14:13:54.802281  262672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt.5fb0b4cb: {Name:mkc5898b24579e29b13ed60227dc2791c2ea18c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:54.802487  262672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key.5fb0b4cb ...
	I1109 14:13:54.802510  262672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key.5fb0b4cb: {Name:mk94363041a7e6f23d0728480d20cf6f92e2e879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:54.802621  262672 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt.5fb0b4cb -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt
	I1109 14:13:54.802749  262672 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key.5fb0b4cb -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key
	I1109 14:13:54.802842  262672 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key
	I1109 14:13:54.802868  262672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.crt with IP's: []
	I1109 14:13:55.352289  262672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.crt ...
	I1109 14:13:55.352352  262672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.crt: {Name:mk8358af3a0700bdbd0895030c5202aa77d811b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:55.352544  262672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key ...
	I1109 14:13:55.352564  262672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key: {Name:mk482ca78d14cef42f771fa15573958882f003a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:13:55.352819  262672 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:13:55.352898  262672 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:13:55.352915  262672 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:13:55.352962  262672 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:13:55.353007  262672 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:13:55.353041  262672 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:13:55.353101  262672 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:13:55.353946  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:13:55.378120  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:13:55.399762  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:13:55.421542  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:13:55.444323  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:13:55.466539  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:13:55.489061  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:13:55.514982  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:13:55.537752  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:13:55.561019  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:13:55.583655  262672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:13:55.603426  262672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:13:55.617564  262672 ssh_runner.go:195] Run: openssl version
	I1109 14:13:55.624542  262672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:13:55.634299  262672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:13:55.638454  262672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:13:55.638506  262672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:13:55.676817  262672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:13:55.690075  262672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:13:55.703019  262672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:13:55.706973  262672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:13:55.707027  262672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:13:55.742855  262672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:13:55.752163  262672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:13:55.761506  262672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:55.766715  262672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:55.766771  262672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:13:55.818401  262672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:13:55.829953  262672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:13:55.834614  262672 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
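Unlike the embed-certs-273180 run above, this profile has no apiserver-kubelet-client.crt yet, so the failed stat is taken as "likely first start" and the flow falls through to a fresh kubeadm init instead of the cluster-restart path. The decision amounts to a file-existence probe, roughly (illustrative sketch, not the actual certs.go logic):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// isFirstStart treats a missing apiserver-kubelet-client.crt as "no previous
// cluster state", mirroring the stat check in the log (illustrative only).
func isFirstStart(certDir string) (bool, error) {
	_, err := os.Stat(certDir + "/apiserver-kubelet-client.crt")
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil // first start: run kubeadm init from scratch
	}
	return false, err // a nil error means the cert exists, so take the restart path
}

func main() {
	first, err := isFirstStart("/var/lib/minikube/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("first start:", first)
}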
	I1109 14:13:55.834739  262672 kubeadm.go:401] StartCluster: {Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:13:55.834829  262672 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:13:55.834878  262672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:13:55.871673  262672 cri.go:89] found id: ""
	I1109 14:13:55.871736  262672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:13:55.882100  262672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:13:55.891423  262672 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:13:55.891538  262672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:13:55.899687  262672 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:13:55.899719  262672 kubeadm.go:158] found existing configuration files:
	
	I1109 14:13:55.899764  262672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:13:55.908990  262672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:13:55.909045  262672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:13:55.916603  262672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:13:55.924800  262672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:13:55.924861  262672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:13:55.933290  262672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:13:55.941218  262672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:13:55.941289  262672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:13:55.949408  262672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:13:55.958084  262672 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:13:55.958148  262672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:13:55.967993  262672 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:13:56.048216  262672 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:13:56.153906  262672 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:13:54.532895  264151 addons.go:239] Setting addon default-storageclass=true in "embed-certs-273180"
	W1109 14:13:54.532921  264151 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:13:54.532948  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:13:54.532952  264151 host.go:66] Checking if "embed-certs-273180" exists ...
	I1109 14:13:54.532962  264151 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:13:54.533010  264151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-273180
	I1109 14:13:54.533449  264151 cli_runner.go:164] Run: docker container inspect embed-certs-273180 --format={{.State.Status}}
	I1109 14:13:54.575467  264151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/embed-certs-273180/id_rsa Username:docker}
	I1109 14:13:54.581308  264151 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:13:54.581327  264151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:13:54.581378  264151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-273180
	I1109 14:13:54.582671  264151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/embed-certs-273180/id_rsa Username:docker}
	I1109 14:13:54.606811  264151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/embed-certs-273180/id_rsa Username:docker}
	I1109 14:13:54.682611  264151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:13:54.712002  264151 node_ready.go:35] waiting up to 6m0s for node "embed-certs-273180" to be "Ready" ...
	I1109 14:13:54.715435  264151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:13:54.727418  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:13:54.727442  264151 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:13:54.738009  264151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:13:54.750632  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:13:54.750664  264151 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:13:54.790108  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:13:54.790140  264151 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:13:54.822193  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:13:54.822219  264151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:13:54.854232  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:13:54.854259  264151 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:13:54.880240  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:13:54.880261  264151 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:13:54.899974  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:13:54.900012  264151 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:13:54.927090  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:13:54.927134  264151 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:13:54.943359  264151 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:13:54.943384  264151 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:13:54.959515  264151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:13:56.212865  264151 node_ready.go:49] node "embed-certs-273180" is "Ready"
	I1109 14:13:56.212895  264151 node_ready.go:38] duration metric: took 1.500863437s for node "embed-certs-273180" to be "Ready" ...
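The node readiness wait allows up to 6m0s but resolves in about 1.5s here because the node object already carries a Ready=True condition. With client-go the same probe is a condition lookup on the Node; a hedged sketch using the kubeconfig path and node name from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has the Ready condition set to True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21139-5854/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(context.Background(), cs, "embed-certs-273180")
	fmt.Println(ready, err)
}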
	I1109 14:13:56.212910  264151 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:13:56.212969  264151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:13:56.820192  264151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.104718472s)
	I1109 14:13:56.820200  264151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.082155125s)
	I1109 14:13:56.820333  264151 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.860787035s)
	I1109 14:13:56.820375  264151 api_server.go:72] duration metric: took 2.331744418s to wait for apiserver process to appear ...
	I1109 14:13:56.820388  264151 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:13:56.820409  264151 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1109 14:13:56.824738  264151 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-273180 addons enable metrics-server
	
	I1109 14:13:56.827794  264151 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:13:56.827816  264151 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:13:56.833175  264151 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1109 14:13:53.274593  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	W1109 14:13:55.275107  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	I1109 14:13:56.834257  264151 addons.go:515] duration metric: took 2.345860185s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:13:54.155313  268505 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:13:54.155504  268505 start.go:159] libmachine.API.Create for "auto-593530" (driver="docker")
	I1109 14:13:54.155535  268505 client.go:173] LocalClient.Create starting
	I1109 14:13:54.155611  268505 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:13:54.155674  268505 main.go:143] libmachine: Decoding PEM data...
	I1109 14:13:54.155699  268505 main.go:143] libmachine: Parsing certificate...
	I1109 14:13:54.155760  268505 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:13:54.155795  268505 main.go:143] libmachine: Decoding PEM data...
	I1109 14:13:54.155812  268505 main.go:143] libmachine: Parsing certificate...
	I1109 14:13:54.156161  268505 cli_runner.go:164] Run: docker network inspect auto-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:13:54.174488  268505 cli_runner.go:211] docker network inspect auto-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:13:54.174566  268505 network_create.go:284] running [docker network inspect auto-593530] to gather additional debugging logs...
	I1109 14:13:54.174589  268505 cli_runner.go:164] Run: docker network inspect auto-593530
	W1109 14:13:54.195799  268505 cli_runner.go:211] docker network inspect auto-593530 returned with exit code 1
	I1109 14:13:54.195828  268505 network_create.go:287] error running [docker network inspect auto-593530]: docker network inspect auto-593530: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-593530 not found
	I1109 14:13:54.195843  268505 network_create.go:289] output of [docker network inspect auto-593530]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-593530 not found
	
	** /stderr **
	I1109 14:13:54.195956  268505 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:13:54.217441  268505 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:13:54.218324  268505 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:13:54.219234  268505 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:13:54.219922  268505 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-48111e278cbe IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:19:f0:f9:a5:e1} reservation:<nil>}
	I1109 14:13:54.220498  268505 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1418d8b0aecf IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:96:45:f5:f6:93:a3} reservation:<nil>}
	I1109 14:13:54.221026  268505 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-0e4394163f33 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2e:88:87:a6:3a:9b} reservation:<nil>}
	I1109 14:13:54.221834  268505 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f36850}
	I1109 14:13:54.221864  268505 network_create.go:124] attempt to create docker network auto-593530 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1109 14:13:54.221920  268505 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-593530 auto-593530
	I1109 14:13:54.294258  268505 network_create.go:108] docker network auto-593530 192.168.103.0/24 created
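The subnet scan above walks the existing minikube bridge networks (192.168.49.0/24 through 192.168.94.0/24 are already taken) and settles on 192.168.103.0/24 for auto-593530. As an illustrative sketch only, the same result can be confirmed by hand with plain docker commands, using the names and ranges shown in the log:

	# Report the subnet/gateway of the network minikube just created.
	docker network inspect auto-593530 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# List the bridge networks whose /24 ranges were skipped above.
	docker network ls --filter driver=bridge --format '{{.Name}}'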
	I1109 14:13:54.294288  268505 kic.go:121] calculated static IP "192.168.103.2" for the "auto-593530" container
	I1109 14:13:54.294369  268505 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:13:54.321506  268505 cli_runner.go:164] Run: docker volume create auto-593530 --label name.minikube.sigs.k8s.io=auto-593530 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:13:54.348919  268505 oci.go:103] Successfully created a docker volume auto-593530
	I1109 14:13:54.348998  268505 cli_runner.go:164] Run: docker run --rm --name auto-593530-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-593530 --entrypoint /usr/bin/test -v auto-593530:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:13:54.917980  268505 oci.go:107] Successfully prepared a docker volume auto-593530
	I1109 14:13:54.918061  268505 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:13:54.918073  268505 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:13:54.918133  268505 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-593530:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
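The two docker run commands above first probe the freshly created auto-593530 volume and then untar the cached v1.34.1 cri-o preload into it, so the node container later starts with its images already in place. A rough way to inspect both sides of that step (the tarball path comes from the log; the busybox image is an assumption, any small image would do):

	# The preload tarball minikube is extracting:
	ls -lh /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/
	# Peek into the volume that will back /var in the node container;
	# cri-o keeps its image store under /var/lib/containers/storage.
	docker run --rm -v auto-593530:/var busybox ls /var/lib/containers/storage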
	W1109 14:13:57.774874  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	W1109 14:14:00.274465  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	I1109 14:13:57.321471  264151 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1109 14:13:57.327425  264151 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:13:57.327453  264151 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:13:57.821145  264151 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1109 14:13:57.825780  264151 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1109 14:13:57.826752  264151 api_server.go:141] control plane version: v1.34.1
	I1109 14:13:57.826775  264151 api_server.go:131] duration metric: took 1.0063794s to wait for apiserver health ...
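The 500 responses above are expected while the apiserver's post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish; minikube simply re-polls /healthz until it gets the 200 logged here. As a hedged sketch, the same per-check breakdown can be requested manually (the kubectl context name matches the profile, and the cert paths assume the default MINIKUBE_HOME layout rather than the Jenkins one used in this run):

	# Via kubectl, using the context minikube wrote for this profile:
	kubectl --context embed-certs-273180 get --raw='/healthz?verbose'
	# Or directly against the endpoint from the log, with the profile's client certs:
	curl --cacert "$HOME/.minikube/ca.crt" \
	     --cert "$HOME/.minikube/profiles/embed-certs-273180/client.crt" \
	     --key "$HOME/.minikube/profiles/embed-certs-273180/client.key" \
	     'https://192.168.94.2:8443/healthz?verbose'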
	I1109 14:13:57.826782  264151 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:13:57.842526  264151 system_pods.go:59] 8 kube-system pods found
	I1109 14:13:57.842562  264151 system_pods.go:61] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:13:57.842573  264151 system_pods.go:61] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:13:57.842589  264151 system_pods.go:61] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:13:57.842599  264151 system_pods.go:61] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:13:57.842609  264151 system_pods.go:61] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:13:57.842618  264151 system_pods.go:61] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:57.842626  264151 system_pods.go:61] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:13:57.842633  264151 system_pods.go:61] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:13:57.842661  264151 system_pods.go:74] duration metric: took 15.871804ms to wait for pod list to return data ...
	I1109 14:13:57.842676  264151 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:13:57.844885  264151 default_sa.go:45] found service account: "default"
	I1109 14:13:57.844909  264151 default_sa.go:55] duration metric: took 2.225372ms for default service account to be created ...
	I1109 14:13:57.844919  264151 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:13:57.847685  264151 system_pods.go:86] 8 kube-system pods found
	I1109 14:13:57.847714  264151 system_pods.go:89] "coredns-66bc5c9577-bbnm4" [b6f42679-62a3-4b25-9119-c08fe6b07c0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:13:57.847725  264151 system_pods.go:89] "etcd-embed-certs-273180" [bbb903eb-c06b-4c1e-948e-3f5db3af34b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:13:57.847735  264151 system_pods.go:89] "kindnet-scgq8" [5aaeb813-c07c-47fe-bc0c-2fdd09b0c5ba] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:13:57.847747  264151 system_pods.go:89] "kube-apiserver-embed-certs-273180" [0f55bcc7-9c38-4bb2-96b6-ff2012e9d407] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:13:57.847759  264151 system_pods.go:89] "kube-controller-manager-embed-certs-273180" [c03a3fa3-7a12-47a2-b628-eae16a25cfb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:13:57.847769  264151 system_pods.go:89] "kube-proxy-k6zsl" [aa0ed3ae-34a8-4368-8e1c-385033e46f0e] Running
	I1109 14:13:57.847777  264151 system_pods.go:89] "kube-scheduler-embed-certs-273180" [68f3160d-05e0-491f-8254-44379226803a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:13:57.847788  264151 system_pods.go:89] "storage-provisioner" [d9104f3d-417a-49dc-86ba-af31925458bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:13:57.847800  264151 system_pods.go:126] duration metric: took 2.866794ms to wait for k8s-apps to be running ...
	I1109 14:13:57.847815  264151 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:13:57.847863  264151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:13:57.861263  264151 system_svc.go:56] duration metric: took 13.445618ms WaitForService to wait for kubelet
	I1109 14:13:57.861289  264151 kubeadm.go:587] duration metric: took 3.372659555s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:13:57.861305  264151 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:13:57.989100  264151 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:13:57.989136  264151 node_conditions.go:123] node cpu capacity is 8
	I1109 14:13:57.989152  264151 node_conditions.go:105] duration metric: took 127.842324ms to run NodePressure ...
	I1109 14:13:57.989170  264151 start.go:242] waiting for startup goroutines ...
	I1109 14:13:57.989179  264151 start.go:247] waiting for cluster config update ...
	I1109 14:13:57.989193  264151 start.go:256] writing updated cluster config ...
	I1109 14:13:57.991046  264151 ssh_runner.go:195] Run: rm -f paused
	I1109 14:13:57.995893  264151 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:13:57.999694  264151 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:14:00.005345  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	W1109 14:14:02.006631  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	I1109 14:13:59.768848  268505 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-593530:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.850665974s)
	I1109 14:13:59.768888  268505 kic.go:203] duration metric: took 4.850810592s to extract preloaded images to volume ...
	W1109 14:13:59.768997  268505 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 14:13:59.769040  268505 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 14:13:59.769088  268505 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:13:59.829078  268505 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-593530 --name auto-593530 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-593530 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-593530 --network auto-593530 --ip 192.168.103.2 --volume auto-593530:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:14:00.161725  268505 cli_runner.go:164] Run: docker container inspect auto-593530 --format={{.State.Running}}
	I1109 14:14:00.184387  268505 cli_runner.go:164] Run: docker container inspect auto-593530 --format={{.State.Status}}
	I1109 14:14:00.202854  268505 cli_runner.go:164] Run: docker exec auto-593530 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:14:00.256146  268505 oci.go:144] the created container "auto-593530" has a running status.
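At this point the node container is running on its dedicated network with the static IP calculated earlier, and its SSH/API ports are published on loopback (the provisioner connects to 127.0.0.1:33095 a few lines below). An illustrative status check with plain docker commands:

	# Container state and its address on the auto-593530 network:
	docker inspect auto-593530 \
	  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "auto-593530").IPAddress}}'
	# Host port that container port 22 (SSH) was published to:
	docker port auto-593530 22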
	I1109 14:14:00.256178  268505 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/auto-593530/id_rsa...
	I1109 14:14:00.460285  268505 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/auto-593530/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:14:00.485284  268505 cli_runner.go:164] Run: docker container inspect auto-593530 --format={{.State.Status}}
	I1109 14:14:00.507312  268505 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:14:00.507331  268505 kic_runner.go:114] Args: [docker exec --privileged auto-593530 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:14:00.550938  268505 cli_runner.go:164] Run: docker container inspect auto-593530 --format={{.State.Status}}
	I1109 14:14:00.567449  268505 machine.go:94] provisionDockerMachine start ...
	I1109 14:14:00.567526  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:00.584852  268505 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:00.585082  268505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:14:00.585095  268505 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:14:00.585788  268505 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57450->127.0.0.1:33095: read: connection reset by peer
	I1109 14:14:03.730242  268505 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-593530
	
	I1109 14:14:03.730270  268505 ubuntu.go:182] provisioning hostname "auto-593530"
	I1109 14:14:03.730347  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:03.755401  268505 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:03.755723  268505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:14:03.755744  268505 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-593530 && echo "auto-593530" | sudo tee /etc/hostname
	I1109 14:14:03.917567  268505 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-593530
	
	I1109 14:14:03.917686  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:03.940836  268505 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:03.941123  268505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:14:03.941150  268505 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-593530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-593530/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-593530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:14:06.394573  262672 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:14:06.394674  262672 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:14:06.394768  262672 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:14:06.394819  262672 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:14:06.394850  262672 kubeadm.go:319] OS: Linux
	I1109 14:14:06.394918  262672 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:14:06.394996  262672 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:14:06.395079  262672 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:14:06.395138  262672 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:14:06.395207  262672 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:14:06.395265  262672 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:14:06.395311  262672 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:14:06.395372  262672 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:14:06.395463  262672 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:14:06.395603  262672 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:14:06.395763  262672 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:14:06.395850  262672 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:14:06.397342  262672 out.go:252]   - Generating certificates and keys ...
	I1109 14:14:06.397449  262672 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:14:06.397545  262672 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:14:06.397602  262672 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:14:06.397663  262672 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:14:06.397756  262672 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:14:06.397852  262672 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:14:06.397902  262672 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:14:06.398012  262672 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-331530] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:14:06.398100  262672 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:14:06.398261  262672 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-331530] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:14:06.398369  262672 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:14:06.398475  262672 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:14:06.398558  262672 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:14:06.398721  262672 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:14:06.398796  262672 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:14:06.398893  262672 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:14:06.398955  262672 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:14:06.399019  262672 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:14:06.399092  262672 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:14:06.399157  262672 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:14:06.399229  262672 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:14:06.400572  262672 out.go:252]   - Booting up control plane ...
	I1109 14:14:06.400717  262672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:14:06.400827  262672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:14:06.400919  262672 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:14:06.401049  262672 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:14:06.401203  262672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:14:06.401365  262672 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:14:06.401491  262672 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:14:06.401554  262672 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:14:06.401768  262672 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:14:06.401904  262672 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:14:06.401990  262672 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.598305ms
	I1109 14:14:06.402123  262672 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:14:06.402245  262672 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1109 14:14:06.402381  262672 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:14:06.402488  262672 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:14:06.402594  262672 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.082287258s
	I1109 14:14:06.402734  262672 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.500027394s
	I1109 14:14:06.402843  262672 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001858726s
	I1109 14:14:06.402998  262672 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:14:06.403149  262672 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:14:06.403245  262672 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:14:06.403504  262672 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-331530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:14:06.403595  262672 kubeadm.go:319] [bootstrap-token] Using token: 7wmxla.bmsbmmnwrd0o4euu
	I1109 14:14:06.404767  262672 out.go:252]   - Configuring RBAC rules ...
	I1109 14:14:06.404862  262672 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:14:06.404929  262672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:14:06.405056  262672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:14:06.405220  262672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:14:06.405386  262672 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:14:06.405506  262672 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:14:06.405666  262672 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:14:06.405743  262672 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:14:06.405823  262672 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:14:06.405835  262672 kubeadm.go:319] 
	I1109 14:14:06.405922  262672 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:14:06.405930  262672 kubeadm.go:319] 
	I1109 14:14:06.406025  262672 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:14:06.406033  262672 kubeadm.go:319] 
	I1109 14:14:06.406069  262672 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:14:06.406154  262672 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:14:06.406229  262672 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:14:06.406250  262672 kubeadm.go:319] 
	I1109 14:14:06.406295  262672 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:14:06.406301  262672 kubeadm.go:319] 
	I1109 14:14:06.406364  262672 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:14:06.406378  262672 kubeadm.go:319] 
	I1109 14:14:06.406440  262672 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:14:06.406538  262672 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:14:06.406667  262672 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:14:06.406682  262672 kubeadm.go:319] 
	I1109 14:14:06.406806  262672 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:14:06.406936  262672 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:14:06.406952  262672 kubeadm.go:319] 
	I1109 14:14:06.407056  262672 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7wmxla.bmsbmmnwrd0o4euu \
	I1109 14:14:06.407173  262672 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:14:06.407200  262672 kubeadm.go:319] 	--control-plane 
	I1109 14:14:06.407206  262672 kubeadm.go:319] 
	I1109 14:14:06.407318  262672 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:14:06.407340  262672 kubeadm.go:319] 
	I1109 14:14:06.407453  262672 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7wmxla.bmsbmmnwrd0o4euu \
	I1109 14:14:06.407590  262672 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 14:14:06.407603  262672 cni.go:84] Creating CNI manager for ""
	I1109 14:14:06.407612  262672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:14:06.408814  262672 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:14:06.409822  262672 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:14:06.415058  262672 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:14:06.415076  262672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:14:06.437217  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
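With kubeadm init finished, minikube applies the kindnet CNI manifest it generated (the 2601-byte cni.yaml scp'd above). A minimal follow-up check is sketched below; the DaemonSet name kindnet and the kube-system namespace are assumptions based on the standard minikube kindnet manifest:

	kubectl --context newest-cni-331530 -n kube-system rollout status daemonset kindnet
	kubectl --context newest-cni-331530 get nodes -o wide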
	I1109 14:14:04.074351  268505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:14:04.074391  268505 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:14:04.074440  268505 ubuntu.go:190] setting up certificates
	I1109 14:14:04.074453  268505 provision.go:84] configureAuth start
	I1109 14:14:04.074532  268505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-593530
	I1109 14:14:04.095725  268505 provision.go:143] copyHostCerts
	I1109 14:14:04.095794  268505 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:14:04.095812  268505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:14:04.095880  268505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:14:04.095981  268505 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:14:04.095992  268505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:14:04.096035  268505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:14:04.096111  268505 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:14:04.096122  268505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:14:04.096160  268505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:14:04.096230  268505 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.auto-593530 san=[127.0.0.1 192.168.103.2 auto-593530 localhost minikube]
	I1109 14:14:04.348358  268505 provision.go:177] copyRemoteCerts
	I1109 14:14:04.348438  268505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:14:04.348492  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:04.374961  268505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/auto-593530/id_rsa Username:docker}
	I1109 14:14:04.485174  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:14:04.512486  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1109 14:14:04.536939  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:14:04.560402  268505 provision.go:87] duration metric: took 485.934725ms to configureAuth
	I1109 14:14:04.560430  268505 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:14:04.560622  268505 config.go:182] Loaded profile config "auto-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:04.560755  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:04.584291  268505 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:04.584546  268505 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:14:04.584570  268505 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:14:04.883605  268505 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:14:04.883633  268505 machine.go:97] duration metric: took 4.31616378s to provisionDockerMachine
	I1109 14:14:04.883675  268505 client.go:176] duration metric: took 10.728132513s to LocalClient.Create
	I1109 14:14:04.883699  268505 start.go:167] duration metric: took 10.728193278s to libmachine.API.Create "auto-593530"
	I1109 14:14:04.883716  268505 start.go:293] postStartSetup for "auto-593530" (driver="docker")
	I1109 14:14:04.883730  268505 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:14:04.883795  268505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:14:04.883832  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:04.909393  268505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/auto-593530/id_rsa Username:docker}
	I1109 14:14:05.016128  268505 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:14:05.020681  268505 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:14:05.020715  268505 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:14:05.020727  268505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:14:05.020780  268505 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:14:05.020885  268505 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:14:05.021018  268505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:14:05.031931  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:14:05.058293  268505 start.go:296] duration metric: took 174.562792ms for postStartSetup
	I1109 14:14:05.058701  268505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-593530
	I1109 14:14:05.079899  268505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/config.json ...
	I1109 14:14:05.080202  268505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:14:05.080251  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:05.101496  268505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/auto-593530/id_rsa Username:docker}
	I1109 14:14:05.198689  268505 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:14:05.204337  268505 start.go:128] duration metric: took 11.050505465s to createHost
	I1109 14:14:05.204361  268505 start.go:83] releasing machines lock for "auto-593530", held for 11.05064258s
	I1109 14:14:05.204424  268505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-593530
	I1109 14:14:05.228150  268505 ssh_runner.go:195] Run: cat /version.json
	I1109 14:14:05.228210  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:05.228213  268505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:14:05.228281  268505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-593530
	I1109 14:14:05.252313  268505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/auto-593530/id_rsa Username:docker}
	I1109 14:14:05.252674  268505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/auto-593530/id_rsa Username:docker}
	I1109 14:14:05.415860  268505 ssh_runner.go:195] Run: systemctl --version
	I1109 14:14:05.423677  268505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:14:05.471371  268505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:14:05.477093  268505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:14:05.477160  268505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:14:05.509996  268505 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:14:05.510022  268505 start.go:496] detecting cgroup driver to use...
	I1109 14:14:05.510053  268505 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:14:05.510100  268505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:14:05.531178  268505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:14:05.546465  268505 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:14:05.546538  268505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:14:05.567016  268505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:14:05.590303  268505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:14:05.714150  268505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:14:05.837290  268505 docker.go:234] disabling docker service ...
	I1109 14:14:05.837361  268505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:14:05.864394  268505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:14:05.882267  268505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:14:06.001478  268505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:14:06.108369  268505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:14:06.121344  268505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:14:06.135826  268505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:14:06.135897  268505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:06.167342  268505 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:14:06.167412  268505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:06.182217  268505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:06.248941  268505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:06.270139  268505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:14:06.278965  268505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:06.325101  268505 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:06.369757  268505 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:06.378796  268505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:14:06.386876  268505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:14:06.395734  268505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:06.496217  268505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:14:07.012877  268505 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:14:07.012950  268505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:14:07.017599  268505 start.go:564] Will wait 60s for crictl version
	I1109 14:14:07.017676  268505 ssh_runner.go:195] Run: which crictl
	I1109 14:14:07.021913  268505 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:14:07.046803  268505 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:14:07.046893  268505 ssh_runner.go:195] Run: crio --version
	I1109 14:14:07.078893  268505 ssh_runner.go:195] Run: crio --version
	I1109 14:14:07.107122  268505 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
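The sed block above rewrites /etc/crio/crio.conf.d/02-crio.conf on the node (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "systemd", conmon_cgroup "pod", net.ipv4.ip_unprivileged_port_start=0 under default_sysctls), restarts crio, and then confirms CRI-O 1.34.1 through crictl. A quick spot-check of the result from the host, as a sketch (minikube here stands in for the out/minikube-linux-amd64 binary used in this run):

	minikube -p auto-593530 ssh -- "grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
	minikube -p auto-593530 ssh -- "sudo crictl version"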
	W1109 14:14:02.275457  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	W1109 14:14:04.775995  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	W1109 14:14:04.509447  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	W1109 14:14:07.005533  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	I1109 14:14:07.108116  268505 cli_runner.go:164] Run: docker network inspect auto-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:14:07.126297  268505 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1109 14:14:07.130074  268505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:14:07.140304  268505 kubeadm.go:884] updating cluster {Name:auto-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:14:07.140424  268505 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:07.140482  268505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:14:07.172098  268505 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:14:07.172113  268505 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:14:07.172149  268505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:14:07.197170  268505 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:14:07.197185  268505 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:14:07.197192  268505 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1109 14:14:07.197265  268505 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-593530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:14:07.197319  268505 ssh_runner.go:195] Run: crio config
	I1109 14:14:07.242616  268505 cni.go:84] Creating CNI manager for ""
	I1109 14:14:07.242655  268505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:14:07.242675  268505 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:14:07.242697  268505 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-593530 NodeName:auto-593530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:14:07.242826  268505 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-593530"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:14:07.242886  268505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:14:07.250974  268505 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:14:07.251040  268505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:14:07.258630  268505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1109 14:14:07.271041  268505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:14:07.285674  268505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
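
The multi-document kubeadm config printed above is what the scp step just copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch of reading that file back and spot-checking two KubeletConfiguration fields with gopkg.in/yaml.v3; the check itself is illustrative and not something minikube performs here:

	package main
	
	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	// kubeletConfig picks out just the fields we want to report on.
	type kubeletConfig struct {
		Kind          string `yaml:"kind"`
		CgroupDriver  string `yaml:"cgroupDriver"`
		StaticPodPath string `yaml:"staticPodPath"`
	}
	
	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// The file is a YAML stream separated by "---"; walk the documents
		// and report on the KubeletConfiguration one.
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var kc kubeletConfig
			err := dec.Decode(&kc)
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				return
			}
			if kc.Kind == "KubeletConfiguration" {
				fmt.Printf("cgroupDriver=%s staticPodPath=%s\n", kc.CgroupDriver, kc.StaticPodPath)
			}
		}
	}
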
	I1109 14:14:07.297396  268505 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:14:07.300701  268505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:14:07.310388  268505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:07.394884  268505 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:14:07.420007  268505 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530 for IP: 192.168.103.2
	I1109 14:14:07.420029  268505 certs.go:195] generating shared ca certs ...
	I1109 14:14:07.420048  268505 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:07.420195  268505 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:14:07.420234  268505 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:14:07.420243  268505 certs.go:257] generating profile certs ...
	I1109 14:14:07.420291  268505 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/client.key
	I1109 14:14:07.420302  268505 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/client.crt with IP's: []
	I1109 14:14:07.591408  268505 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/client.crt ...
	I1109 14:14:07.591438  268505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/client.crt: {Name:mkaf699c5e57cadd7c92cd0d0ea641b0f0940622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:07.591601  268505 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/client.key ...
	I1109 14:14:07.591619  268505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/client.key: {Name:mkfc701f8e5f17c72ed33f701860ae9105be1361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:07.591747  268505 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.key.5ac86f73
	I1109 14:14:07.591769  268505 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.crt.5ac86f73 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1109 14:14:07.984463  268505 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.crt.5ac86f73 ...
	I1109 14:14:07.984488  268505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.crt.5ac86f73: {Name:mkeda8467bf6e4ed6155d34a171b10b86d2ad5be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:07.984625  268505 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.key.5ac86f73 ...
	I1109 14:14:07.984646  268505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.key.5ac86f73: {Name:mk14d404db79ea350790aa0ce86664b3fcb93fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:07.984718  268505 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.crt.5ac86f73 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.crt
	I1109 14:14:07.984818  268505 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.key.5ac86f73 -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.key
	I1109 14:14:07.984884  268505 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/proxy-client.key
	I1109 14:14:07.984898  268505 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/proxy-client.crt with IP's: []
	I1109 14:14:08.166469  268505 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/proxy-client.crt ...
	I1109 14:14:08.166497  268505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/proxy-client.crt: {Name:mk7f9927cb041caaf1800e908b7aad95def6f18d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:08.166679  268505 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/proxy-client.key ...
	I1109 14:14:08.166694  268505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/proxy-client.key: {Name:mk138d001d95206a4644e69f19064f5a43bf20ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
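
The certs.go/crypto.go steps above generate the profile certificates: a key pair is created, a certificate is issued for the listed IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2 for the apiserver cert), and both files are written under the profile directory. A self-contained Go sketch of issuing such a certificate with crypto/x509; the subject name, key size, output paths, and the use of self-signing are illustrative, since minikube actually signs these with its shared minikubeCA:

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Key for the apiserver certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs matching the log line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
			},
		}
		// Self-signed for brevity; a CA-signed cert would pass the CA cert and key here.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
		_ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
	}
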
	I1109 14:14:08.166925  268505 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:14:08.166971  268505 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:14:08.166983  268505 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:14:08.167015  268505 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:14:08.167046  268505 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:14:08.167074  268505 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:14:08.167128  268505 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:14:08.167743  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:14:08.185763  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:14:08.202757  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:14:08.220192  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:14:08.238400  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1109 14:14:08.256524  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:14:08.273059  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:14:08.289920  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/auto-593530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:14:08.305704  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:14:08.323561  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:14:08.340081  268505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:14:08.356092  268505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:14:08.368058  268505 ssh_runner.go:195] Run: openssl version
	I1109 14:14:08.373694  268505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:14:08.381179  268505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:08.384489  268505 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:08.384535  268505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:08.420103  268505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:14:08.427924  268505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:14:08.435656  268505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:14:08.439017  268505 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:14:08.439073  268505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:14:08.475717  268505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:14:08.483422  268505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:14:08.491299  268505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:14:08.494587  268505 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:14:08.494624  268505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:14:08.531773  268505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
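
The ln -fs commands above install each CA certificate under /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL finds trusted certs by directory lookup; the hash comes from the preceding openssl x509 -hash -noout calls. A small Go sketch wrapping those two steps (the cert path is the one from this run, and creating the symlink under /etc/ssl/certs needs root):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// hashLink computes the OpenSSL subject hash for certPath and links it
	// into /etc/ssl/certs/<hash>.0 so OpenSSL's directory lookup finds it.
	func hashLink(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Replace any stale link, mirroring ln -fs.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
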
	I1109 14:14:08.540267  268505 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:14:08.543938  268505 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:14:08.543994  268505 kubeadm.go:401] StartCluster: {Name:auto-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:08.544055  268505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:14:08.544106  268505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:14:08.573858  268505 cri.go:89] found id: ""
	I1109 14:14:08.573918  268505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:14:08.581733  268505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:14:08.589154  268505 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:14:08.589200  268505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:14:08.596712  268505 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:14:08.596735  268505 kubeadm.go:158] found existing configuration files:
	
	I1109 14:14:08.596771  268505 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:14:08.603965  268505 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:14:08.604015  268505 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:14:08.610964  268505 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:14:08.618000  268505 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:14:08.618046  268505 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:14:08.624839  268505 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:14:08.631917  268505 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:14:08.631952  268505 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:14:08.638720  268505 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:14:08.646023  268505 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:14:08.646056  268505 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:14:08.652693  268505 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:14:08.708593  268505 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:14:08.766863  268505 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:14:06.909814  262672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:14:06.909900  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:06.909909  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-331530 minikube.k8s.io/updated_at=2025_11_09T14_14_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=newest-cni-331530 minikube.k8s.io/primary=true
	I1109 14:14:06.925185  262672 ops.go:34] apiserver oom_adj: -16
	I1109 14:14:07.009668  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:07.509761  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:08.010224  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:08.509760  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:09.010490  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:09.509757  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:10.010699  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:10.509761  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:11.010356  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:11.510523  262672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:14:11.575079  262672 kubeadm.go:1114] duration metric: took 4.665272393s to wait for elevateKubeSystemPrivileges
	I1109 14:14:11.575129  262672 kubeadm.go:403] duration metric: took 15.740394143s to StartCluster
	I1109 14:14:11.575152  262672 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:11.575218  262672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:11.576776  262672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:11.577053  262672 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:14:11.577102  262672 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:14:11.577082  262672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:14:11.577198  262672 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-331530"
	I1109 14:14:11.577221  262672 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-331530"
	I1109 14:14:11.577253  262672 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:11.577273  262672 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:11.577277  262672 addons.go:70] Setting default-storageclass=true in profile "newest-cni-331530"
	I1109 14:14:11.577311  262672 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-331530"
	I1109 14:14:11.577659  262672 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:11.577790  262672 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:11.579280  262672 out.go:179] * Verifying Kubernetes components...
	I1109 14:14:11.580871  262672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:11.601635  262672 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:14:11.601760  262672 addons.go:239] Setting addon default-storageclass=true in "newest-cni-331530"
	I1109 14:14:11.601802  262672 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:11.602271  262672 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:11.606134  262672 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:14:11.606155  262672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:14:11.606201  262672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:11.631118  262672 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:14:11.631142  262672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:14:11.631196  262672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:11.634387  262672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:11.655727  262672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
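
The sshutil.go lines above dial the control-plane container over the docker-forwarded port (127.0.0.1:33085) with the per-machine id_rsa key, which is how every ssh_runner command in this log reaches the node. A minimal sketch of building such a client with golang.org/x/crypto/ssh; host-key checking is skipped purely for brevity, and the port and key path are simply the values from this run:

	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node, not for production
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33085", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		// Run one command on the node, the way ssh_runner does.
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
		fmt.Printf("%s err=%v\n", out, err)
	}
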
	I1109 14:14:11.666606  262672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:14:11.730061  262672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:14:11.743500  262672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:14:11.770798  262672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:14:11.861330  262672 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1109 14:14:11.862726  262672 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:11.862776  262672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:12.080224  262672 api_server.go:72] duration metric: took 503.139309ms to wait for apiserver process to appear ...
	I1109 14:14:12.080246  262672 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:12.080262  262672 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:12.084995  262672 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:14:12.085902  262672 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:12.085927  262672 api_server.go:131] duration metric: took 5.675792ms to wait for apiserver health ...
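
The api_server.go wait above polls https://192.168.76.2:8443/healthz until it answers 200/ok before moving on to the kube-system pod checks. A hedged Go sketch of the same poll; the TLS handling is simplified to InsecureSkipVerify, whereas minikube verifies against the cluster's own CA:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns 200 with body "ok" or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skipping verification keeps the sketch short; real code should trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
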
	I1109 14:14:12.085935  262672 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:12.087331  262672 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1109 14:14:07.274694  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	W1109 14:14:09.774034  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	W1109 14:14:11.775156  256773 node_ready.go:57] node "default-k8s-diff-port-326524" has "Ready":"False" status (will retry)
	W1109 14:14:09.005729  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	W1109 14:14:11.505525  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	I1109 14:14:12.088388  262672 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:12.088427  262672 system_pods.go:61] "coredns-66bc5c9577-xvlhm" [ab5d6559-9c58-477e-bae9-e4cedcc2832e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:14:12.088440  262672 system_pods.go:61] "etcd-newest-cni-331530" [3508f193-5b63-49b0-bbc3-f94d167d8b0c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:14:12.088459  262672 system_pods.go:61] "kindnet-rmtgg" [59572d13-2d29-4a86-bf1d-e75d0dd0d43c] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:14:12.088472  262672 system_pods.go:61] "kube-apiserver-newest-cni-331530" [d47aa681-ce72-491a-847d-050b27ac3607] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:14:12.088475  262672 addons.go:515] duration metric: took 511.376204ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:14:12.088484  262672 system_pods.go:61] "kube-controller-manager-newest-cni-331530" [ad70a3ff-3aba-485c-a18a-79b65fb30455] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:14:12.088498  262672 system_pods.go:61] "kube-proxy-fkl5q" [faf18639-aeb9-4b17-bb1d-32e85cf54dce] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:14:12.088508  262672 system_pods.go:61] "kube-scheduler-newest-cni-331530" [e5a8d839-20ee-4400-81dd-abcc742b5c2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:14:12.088518  262672 system_pods.go:61] "storage-provisioner" [77fd8da7-bb4b-4c95-beb6-7d28e7eaabbb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:14:12.088531  262672 system_pods.go:74] duration metric: took 2.588431ms to wait for pod list to return data ...
	I1109 14:14:12.088543  262672 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:12.090313  262672 default_sa.go:45] found service account: "default"
	I1109 14:14:12.090333  262672 default_sa.go:55] duration metric: took 1.780048ms for default service account to be created ...
	I1109 14:14:12.090345  262672 kubeadm.go:587] duration metric: took 513.26351ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:14:12.090378  262672 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:12.092156  262672 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:12.092177  262672 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:12.092190  262672 node_conditions.go:105] duration metric: took 1.807425ms to run NodePressure ...
	I1109 14:14:12.092204  262672 start.go:242] waiting for startup goroutines ...
	I1109 14:14:12.365527  262672 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-331530" context rescaled to 1 replicas
	I1109 14:14:12.365569  262672 start.go:247] waiting for cluster config update ...
	I1109 14:14:12.365583  262672 start.go:256] writing updated cluster config ...
	I1109 14:14:12.365884  262672 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:12.413679  262672 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:12.416465  262672 out.go:179] * Done! kubectl is now configured to use "newest-cni-331530" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.690101305Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.690326553Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-fkl5q/POD" id=4489f53c-1fff-4023-8d6c-02ec156d7571 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.690452562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.691266996Z" level=info msg="Ran pod sandbox 2c665e169f30ae790e6f646b12ac795567b0fcf3fc28653eeaee926958276b1a with infra container: kube-system/kindnet-rmtgg/POD" id=168007da-f6a7-47d9-87da-9834c7ef2b88 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.693293165Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4489f53c-1fff-4023-8d6c-02ec156d7571 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.694024105Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=cc47abe0-f7f8-4b72-be2f-956ecb0bced2 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.695578662Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.697355417Z" level=info msg="Ran pod sandbox 2bbe9462118f9cea0feaa63ff628d5a19d1fb40403af49a829c1bec916680e47 with infra container: kube-system/kube-proxy-fkl5q/POD" id=4489f53c-1fff-4023-8d6c-02ec156d7571 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.697695447Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=72aa730a-2fbe-43a8-9c8a-c87b5841814c name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.699291641Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=eb4d1f73-3a1e-471a-91e0-8521023d6c8b name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.701006347Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=64a28e5a-78ac-4db4-bf96-8042beedeed1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.702281358Z" level=info msg="Creating container: kube-system/kindnet-rmtgg/kindnet-cni" id=4ff6ccef-1d94-4312-8b73-060e73f10944 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.702376238Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.70435166Z" level=info msg="Creating container: kube-system/kube-proxy-fkl5q/kube-proxy" id=b8b2b1be-66cd-4e9e-ab24-e1aa8ef68860 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.704610314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.70729809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.707942792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.712208693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.712784765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.774342979Z" level=info msg="Created container b1e44e7eca3f69973b698078e39783033fd895ff1a43351878f15f609199e487: kube-system/kindnet-rmtgg/kindnet-cni" id=4ff6ccef-1d94-4312-8b73-060e73f10944 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.775742729Z" level=info msg="Starting container: b1e44e7eca3f69973b698078e39783033fd895ff1a43351878f15f609199e487" id=08cce6ac-49b8-43bc-83eb-a8268f906963 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.778783265Z" level=info msg="Created container 3960ff64ee782090cbd6fde11f3617517193fd85d9c80631a4237dfcdf05ba50: kube-system/kube-proxy-fkl5q/kube-proxy" id=b8b2b1be-66cd-4e9e-ab24-e1aa8ef68860 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.778883641Z" level=info msg="Started container" PID=1599 containerID=b1e44e7eca3f69973b698078e39783033fd895ff1a43351878f15f609199e487 description=kube-system/kindnet-rmtgg/kindnet-cni id=08cce6ac-49b8-43bc-83eb-a8268f906963 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c665e169f30ae790e6f646b12ac795567b0fcf3fc28653eeaee926958276b1a
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.780783519Z" level=info msg="Starting container: 3960ff64ee782090cbd6fde11f3617517193fd85d9c80631a4237dfcdf05ba50" id=a972c697-dbe6-4565-b88f-096c9c2b8d23 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:11 newest-cni-331530 crio[784]: time="2025-11-09T14:14:11.784404025Z" level=info msg="Started container" PID=1602 containerID=3960ff64ee782090cbd6fde11f3617517193fd85d9c80631a4237dfcdf05ba50 description=kube-system/kube-proxy-fkl5q/kube-proxy id=a972c697-dbe6-4565-b88f-096c9c2b8d23 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bbe9462118f9cea0feaa63ff628d5a19d1fb40403af49a829c1bec916680e47
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3960ff64ee782       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   2bbe9462118f9       kube-proxy-fkl5q                            kube-system
	b1e44e7eca3f6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   2c665e169f30a       kindnet-rmtgg                               kube-system
	c03470c76976f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   b4f5d31a5fa2d       kube-apiserver-newest-cni-331530            kube-system
	976625b5bcdc1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   e43d2b906a66c       etcd-newest-cni-331530                      kube-system
	03cbc4e449396       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   0356c613b12eb       kube-scheduler-newest-cni-331530            kube-system
	84f25cf89506a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   e7d060e0ac35d       kube-controller-manager-newest-cni-331530   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-331530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-331530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=newest-cni-331530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_14_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:14:03 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-331530
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:14:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:14:05 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:14:05 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:14:05 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 09 Nov 2025 14:14:05 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-331530
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                7aba5339-7922-4c58-b653-e5c31d75079c
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-331530                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-rmtgg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-331530             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-331530    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-fkl5q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-331530             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-331530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-331530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-331530 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-331530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-331530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-331530 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-331530 event: Registered Node newest-cni-331530 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [976625b5bcdc10395ead1a3473aa9078f2f04ebbd83965fa46839ecc4ff08154] <==
	{"level":"warn","ts":"2025-11-09T14:14:02.456130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.463709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.472345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.480295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.488812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.495749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.503707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.514001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.523151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.531937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.540282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.547758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.556537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.564701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.572282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.579769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.600111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.607873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.615590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:02.674988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34786","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:14:06.323081Z","caller":"traceutil/trace.go:172","msg":"trace[1892098775] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"135.706258ms","start":"2025-11-09T14:14:06.187360Z","end":"2025-11-09T14:14:06.323067Z","steps":["trace[1892098775] 'process raft request'  (duration: 81.712458ms)","trace[1892098775] 'compare'  (duration: 53.896447ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:14:06.323111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.555998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-11-09T14:14:06.323177Z","caller":"traceutil/trace.go:172","msg":"trace[337510732] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:296; }","duration":"113.656504ms","start":"2025-11-09T14:14:06.209508Z","end":"2025-11-09T14:14:06.323164Z","steps":["trace[337510732] 'agreement among raft nodes before linearized reading'  (duration: 59.550044ms)","trace[337510732] 'range keys from in-memory index tree'  (duration: 53.916794ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:14:06.828423Z","caller":"traceutil/trace.go:172","msg":"trace[1400244883] transaction","detail":"{read_only:false; number_of_response:0; response_revision:301; }","duration":"124.413455ms","start":"2025-11-09T14:14:06.703991Z","end":"2025-11-09T14:14:06.828405Z","steps":["trace[1400244883] 'process raft request'  (duration: 124.24596ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:14:06.828496Z","caller":"traceutil/trace.go:172","msg":"trace[534600041] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"106.246427ms","start":"2025-11-09T14:14:06.722229Z","end":"2025-11-09T14:14:06.828476Z","steps":["trace[534600041] 'process raft request'  (duration: 106.059675ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:14:13 up 56 min,  0 user,  load average: 5.78, 3.58, 2.15
	Linux newest-cni-331530 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b1e44e7eca3f69973b698078e39783033fd895ff1a43351878f15f609199e487] <==
	I1109 14:14:11.974196       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:14:12.018009       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:14:12.018155       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:14:12.018182       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:14:12.018196       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:14:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:14:12.220563       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:14:12.220597       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:14:12.220612       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:14:12.221043       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:14:12.568470       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:14:12.568496       1 metrics.go:72] Registering metrics
	I1109 14:14:12.568561       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [c03470c76976ff64baada1c08dda641fe9d4ab32f7eb7d7f1fe250a3bc247b1c] <==
	I1109 14:14:03.293596       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:14:03.295514       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:14:03.301583       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:14:03.306845       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1109 14:14:03.306884       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:14:03.306894       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:14:03.306901       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:14:03.306907       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:14:04.197870       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:14:04.201384       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:14:04.201400       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:14:04.757227       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:14:04.799502       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:14:04.894575       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:14:04.905304       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1109 14:14:04.906514       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:14:04.911907       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:14:05.214151       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:14:05.920339       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:14:06.033351       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:14:06.054803       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:14:10.216305       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:14:11.364637       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1109 14:14:11.415688       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:14:11.418743       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [84f25cf89506ad4fedf274389f5d92686298abdf719f0783945bc3333e854fef] <==
	I1109 14:14:10.211966       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1109 14:14:10.212024       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:14:10.212979       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:14:10.212990       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:14:10.213033       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:14:10.213119       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:14:10.213134       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:14:10.213229       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:14:10.213559       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:14:10.215016       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:14:10.217189       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 14:14:10.217230       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:14:10.217293       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:14:10.217323       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:14:10.217330       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:14:10.217335       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:14:10.219183       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:14:10.221360       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:14:10.221373       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:14:10.221378       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:14:10.223398       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:14:10.223411       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:14:10.223840       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-331530" podCIDRs=["10.42.0.0/24"]
	I1109 14:14:10.230131       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:14:10.230462       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [3960ff64ee782090cbd6fde11f3617517193fd85d9c80631a4237dfcdf05ba50] <==
	I1109 14:14:11.829794       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:14:11.897730       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:14:11.998797       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:14:11.998896       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:14:11.998995       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:14:12.023761       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:14:12.023820       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:14:12.029251       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:14:12.029754       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:14:12.029849       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:14:12.031327       1 config.go:200] "Starting service config controller"
	I1109 14:14:12.031354       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:14:12.031386       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:14:12.031393       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:14:12.031409       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:14:12.031400       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:14:12.031451       1 config.go:309] "Starting node config controller"
	I1109 14:14:12.031463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:14:12.131524       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:14:12.131546       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:14:12.131558       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:14:12.131652       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [03cbc4e4493968dd0f4dc963325157c8ab0f2192dab8e64a40cef728b55fb466] <==
	E1109 14:14:03.478454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:14:03.478566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:14:03.478752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:14:03.478768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:14:03.479025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:14:03.479161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:14:03.479174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:14:03.479041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:14:03.479233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:14:03.479264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:14:03.479490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:14:03.479635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:14:03.479761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:14:03.479892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:14:03.479892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:14:03.479958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:14:03.480026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:14:04.292295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:14:04.342243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:14:04.363467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:14:04.407415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:14:04.499437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:14:04.521720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:14:04.560312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1109 14:14:06.175996       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.652713    1328 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.701171    1328 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-331530"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.701406    1328 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-331530"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.701491    1328 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.701599    1328 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-331530"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: E1109 14:14:06.830222    1328 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-331530\" already exists" pod="kube-system/kube-controller-manager-newest-cni-331530"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: E1109 14:14:06.830374    1328 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-331530\" already exists" pod="kube-system/kube-apiserver-newest-cni-331530"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: E1109 14:14:06.830542    1328 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-331530\" already exists" pod="kube-system/kube-scheduler-newest-cni-331530"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: E1109 14:14:06.831932    1328 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-331530\" already exists" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.861974    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-331530" podStartSLOduration=1.861954554 podStartE2EDuration="1.861954554s" podCreationTimestamp="2025-11-09 14:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:14:06.831229825 +0000 UTC m=+1.264832759" watchObservedRunningTime="2025-11-09 14:14:06.861954554 +0000 UTC m=+1.295557488"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.885689    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-331530" podStartSLOduration=1.885664053 podStartE2EDuration="1.885664053s" podCreationTimestamp="2025-11-09 14:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:14:06.86224134 +0000 UTC m=+1.295844273" watchObservedRunningTime="2025-11-09 14:14:06.885664053 +0000 UTC m=+1.319266986"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.899840    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-331530" podStartSLOduration=1.8998198880000001 podStartE2EDuration="1.899819888s" podCreationTimestamp="2025-11-09 14:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:14:06.885892723 +0000 UTC m=+1.319495637" watchObservedRunningTime="2025-11-09 14:14:06.899819888 +0000 UTC m=+1.333422817"
	Nov 09 14:14:06 newest-cni-331530 kubelet[1328]: I1109 14:14:06.900149    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-331530" podStartSLOduration=1.900131496 podStartE2EDuration="1.900131496s" podCreationTimestamp="2025-11-09 14:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:14:06.899965818 +0000 UTC m=+1.333568747" watchObservedRunningTime="2025-11-09 14:14:06.900131496 +0000 UTC m=+1.333734429"
	Nov 09 14:14:10 newest-cni-331530 kubelet[1328]: I1109 14:14:10.325612    1328 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 09 14:14:10 newest-cni-331530 kubelet[1328]: I1109 14:14:10.326374    1328 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 09 14:14:11 newest-cni-331530 kubelet[1328]: I1109 14:14:11.395127    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-lib-modules\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:11 newest-cni-331530 kubelet[1328]: I1109 14:14:11.395157    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tc8\" (UniqueName: \"kubernetes.io/projected/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-kube-api-access-z6tc8\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:11 newest-cni-331530 kubelet[1328]: I1109 14:14:11.395178    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faf18639-aeb9-4b17-bb1d-32e85cf54dce-lib-modules\") pod \"kube-proxy-fkl5q\" (UID: \"faf18639-aeb9-4b17-bb1d-32e85cf54dce\") " pod="kube-system/kube-proxy-fkl5q"
	Nov 09 14:14:11 newest-cni-331530 kubelet[1328]: I1109 14:14:11.395199    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-cni-cfg\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:11 newest-cni-331530 kubelet[1328]: I1109 14:14:11.395218    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-xtables-lock\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:11 newest-cni-331530 kubelet[1328]: I1109 14:14:11.395250    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/faf18639-aeb9-4b17-bb1d-32e85cf54dce-kube-proxy\") pod \"kube-proxy-fkl5q\" (UID: \"faf18639-aeb9-4b17-bb1d-32e85cf54dce\") " pod="kube-system/kube-proxy-fkl5q"
	Nov 09 14:14:11 newest-cni-331530 kubelet[1328]: I1109 14:14:11.395271    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tztj\" (UniqueName: \"kubernetes.io/projected/faf18639-aeb9-4b17-bb1d-32e85cf54dce-kube-api-access-2tztj\") pod \"kube-proxy-fkl5q\" (UID: \"faf18639-aeb9-4b17-bb1d-32e85cf54dce\") " pod="kube-system/kube-proxy-fkl5q"
	Nov 09 14:14:11 newest-cni-331530 kubelet[1328]: I1109 14:14:11.395287    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/faf18639-aeb9-4b17-bb1d-32e85cf54dce-xtables-lock\") pod \"kube-proxy-fkl5q\" (UID: \"faf18639-aeb9-4b17-bb1d-32e85cf54dce\") " pod="kube-system/kube-proxy-fkl5q"
	Nov 09 14:14:12 newest-cni-331530 kubelet[1328]: I1109 14:14:12.730630    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fkl5q" podStartSLOduration=1.730610853 podStartE2EDuration="1.730610853s" podCreationTimestamp="2025-11-09 14:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:14:12.730496262 +0000 UTC m=+7.164099196" watchObservedRunningTime="2025-11-09 14:14:12.730610853 +0000 UTC m=+7.164213786"
	Nov 09 14:14:13 newest-cni-331530 kubelet[1328]: I1109 14:14:13.661076    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rmtgg" podStartSLOduration=2.661053234 podStartE2EDuration="2.661053234s" podCreationTimestamp="2025-11-09 14:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:14:12.743214699 +0000 UTC m=+7.176817638" watchObservedRunningTime="2025-11-09 14:14:13.661053234 +0000 UTC m=+8.094656168"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-331530 -n newest-cni-331530
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-331530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xvlhm storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner: exit status 1 (53.880621ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xvlhm" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.98s)
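Aside on the post-mortem pass above: the helper first lists every pod whose phase is not Running (coredns-66bc5c9577-xvlhm and storage-provisioner here) and then tries to `kubectl describe` them; the describe comes back NotFound, presumably because the pods changed between the two calls. A minimal client-go sketch of the same non-Running query is below — illustrative only; the kubeconfig path is a placeholder and is not what the harness uses.

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Placeholder kubeconfig path; the test harness resolves its own context ("newest-cni-331530").
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector the helper passes to kubectl: every pod whose phase is not Running, in all namespaces.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}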

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-331530 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-331530 --alsologtostderr -v=1: exit status 80 (2.504121691s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-331530 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:14:38.322940  276906 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:14:38.323200  276906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:38.323210  276906 out.go:374] Setting ErrFile to fd 2...
	I1109 14:14:38.323215  276906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:38.323374  276906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:14:38.323580  276906 out.go:368] Setting JSON to false
	I1109 14:14:38.323624  276906 mustload.go:66] Loading cluster: newest-cni-331530
	I1109 14:14:38.323948  276906 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:38.324308  276906 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:38.341467  276906 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:38.341802  276906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:38.397180  276906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-09 14:14:38.386936222 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:38.397819  276906 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-331530 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:14:38.400120  276906 out.go:179] * Pausing node newest-cni-331530 ... 
	I1109 14:14:38.401197  276906 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:38.401493  276906 ssh_runner.go:195] Run: systemctl --version
	I1109 14:14:38.401528  276906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:38.419182  276906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:38.510785  276906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:38.522672  276906 pause.go:52] kubelet running: true
	I1109 14:14:38.522728  276906 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:14:38.674336  276906 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:14:38.674423  276906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:14:38.746713  276906 cri.go:89] found id: "c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753"
	I1109 14:14:38.746740  276906 cri.go:89] found id: "257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587"
	I1109 14:14:38.746747  276906 cri.go:89] found id: "d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642"
	I1109 14:14:38.746754  276906 cri.go:89] found id: "1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707"
	I1109 14:14:38.746771  276906 cri.go:89] found id: "b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6"
	I1109 14:14:38.746776  276906 cri.go:89] found id: "6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940"
	I1109 14:14:38.746780  276906 cri.go:89] found id: ""
	I1109 14:14:38.746831  276906 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:14:38.759110  276906 retry.go:31] will retry after 198.827972ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:38Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:14:38.958533  276906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:38.973742  276906 pause.go:52] kubelet running: false
	I1109 14:14:38.973820  276906 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:14:39.118171  276906 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:14:39.118249  276906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:14:39.189874  276906 cri.go:89] found id: "c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753"
	I1109 14:14:39.189900  276906 cri.go:89] found id: "257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587"
	I1109 14:14:39.189906  276906 cri.go:89] found id: "d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642"
	I1109 14:14:39.189911  276906 cri.go:89] found id: "1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707"
	I1109 14:14:39.189916  276906 cri.go:89] found id: "b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6"
	I1109 14:14:39.189921  276906 cri.go:89] found id: "6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940"
	I1109 14:14:39.189925  276906 cri.go:89] found id: ""
	I1109 14:14:39.189969  276906 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:14:39.200887  276906 retry.go:31] will retry after 393.046026ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:39Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:14:39.594417  276906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:39.607521  276906 pause.go:52] kubelet running: false
	I1109 14:14:39.607584  276906 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:14:39.737024  276906 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:14:39.737117  276906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:14:39.807954  276906 cri.go:89] found id: "c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753"
	I1109 14:14:39.807978  276906 cri.go:89] found id: "257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587"
	I1109 14:14:39.807983  276906 cri.go:89] found id: "d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642"
	I1109 14:14:39.807988  276906 cri.go:89] found id: "1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707"
	I1109 14:14:39.807992  276906 cri.go:89] found id: "b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6"
	I1109 14:14:39.807996  276906 cri.go:89] found id: "6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940"
	I1109 14:14:39.807999  276906 cri.go:89] found id: ""
	I1109 14:14:39.808047  276906 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:14:39.821818  276906 retry.go:31] will retry after 656.348103ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:39Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:14:40.478782  276906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:40.495430  276906 pause.go:52] kubelet running: false
	I1109 14:14:40.495486  276906 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:14:40.660005  276906 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:14:40.660078  276906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:14:40.741127  276906 cri.go:89] found id: "c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753"
	I1109 14:14:40.741151  276906 cri.go:89] found id: "257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587"
	I1109 14:14:40.741156  276906 cri.go:89] found id: "d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642"
	I1109 14:14:40.741160  276906 cri.go:89] found id: "1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707"
	I1109 14:14:40.741163  276906 cri.go:89] found id: "b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6"
	I1109 14:14:40.741167  276906 cri.go:89] found id: "6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940"
	I1109 14:14:40.741171  276906 cri.go:89] found id: ""
	I1109 14:14:40.741213  276906 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:14:40.757917  276906 out.go:203] 
	W1109 14:14:40.759077  276906 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:14:40.759095  276906 out.go:285] * 
	* 
	W1109 14:14:40.765332  276906 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:14:40.766541  276906 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-331530 --alsologtostderr -v=1 failed: exit status 80
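For reference, the stderr log above shows the pause flow stopping the kubelet, finding the kube-system containers via crictl, and then shelling out to `sudo runc list -f json`; that last step fails every time with "open /run/runc: no such file or directory", is retried with growing backoff (≈199ms, ≈393ms, ≈656ms), and the command finally exits 80 with GUEST_PAUSE. The Go sketch below only reproduces that failing step for local diagnosis — it is not minikube's implementation, and the crictl fallback is an assumption added here, not existing behaviour.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// The step that fails in the log above: enumerate containers known to runc.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node /run/runc is missing, so runc exits 1 with
			// "open /run/runc: no such file or directory" -- the same error
			// minikube surfaces as GUEST_PAUSE after its retries.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			// crictl (shown succeeding in the log above) can still list the containers.
			out, err = exec.Command("sudo", "crictl", "ps", "-a", "--quiet").CombinedOutput()
			if err != nil {
				fmt.Printf("crictl also failed: %v\n", err)
				return
			}
		}
		fmt.Printf("%s", out)
	}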
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-331530
helpers_test.go:243: (dbg) docker inspect newest-cni-331530:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa",
	        "Created": "2025-11-09T14:13:46.9311742Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:14:27.457777089Z",
	            "FinishedAt": "2025-11-09T14:14:26.538788189Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/hosts",
	        "LogPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa-json.log",
	        "Name": "/newest-cni-331530",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-331530:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-331530",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa",
	                "LowerDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/merged",
	                "UpperDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/diff",
	                "WorkDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-331530",
	                "Source": "/var/lib/docker/volumes/newest-cni-331530/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-331530",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-331530",
	                "name.minikube.sigs.k8s.io": "newest-cni-331530",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e7537594de818bc57f9aaf12d5c94b2d2df242669b2f7c8c1f28c07a9c1c340",
	            "SandboxKey": "/var/run/docker/netns/5e7537594de8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-331530": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:fa:4a:50:07:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "48111e278cbe43aa4a69b8079dbb61289459a16d778ee4d9d738546cd26897c8",
	                    "EndpointID": "4438a4d5c91cb0524395802703be6c94ad8e0e7cca8dcd7fcec700410aa59570",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-331530",
	                        "b0c3dbe7b9b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
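
The inspect output above is what minikube's cli_runner later parses to recover the published host ports (22/tcp -> 33100 and so on); the same Go template appears verbatim in the log lines further down. A minimal standalone sketch of that lookup, assuming only that the docker CLI is on PATH and reusing the container name from this report (the helper name hostPortFor is illustrative, not part of the test suite):

// port_lookup.go - illustrative only; mirrors the cli_runner template shown in the logs below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor returns the host port bound to containerPort (e.g. "22/tcp")
// by asking the docker CLI to evaluate a Go template over NetworkSettings.Ports.
func hostPortFor(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortFor("newest-cni-331530", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // "33100" per the inspect output above
}
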
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331530 -n newest-cni-331530
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331530 -n newest-cni-331530: exit status 2 (383.268666ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-331530 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-331530 logs -n 25: (1.003150101s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ stop    │ -p embed-certs-273180 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-169816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ no-preload-152932 image list --format=json                                                                                                                                                                                                    │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p no-preload-152932 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p auto-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ stop    │ -p newest-cni-331530 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-331530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ newest-cni-331530 image list --format=json                                                                                                                                                                                                    │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ pause   │ -p newest-cni-331530 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-326524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ ssh     │ -p auto-593530 pgrep -a kubelet                                                                                                                                                                                                               │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ stop    │ -p default-k8s-diff-port-326524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:14:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:14:27.213300  274539 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:14:27.213403  274539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:27.213415  274539 out.go:374] Setting ErrFile to fd 2...
	I1109 14:14:27.213421  274539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:27.213711  274539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:14:27.214223  274539 out.go:368] Setting JSON to false
	I1109 14:14:27.215819  274539 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3417,"bootTime":1762694250,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:14:27.215915  274539 start.go:143] virtualization: kvm guest
	I1109 14:14:27.221200  274539 out.go:179] * [newest-cni-331530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:14:27.222608  274539 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:14:27.222670  274539 notify.go:221] Checking for updates...
	I1109 14:14:27.225558  274539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:14:27.226971  274539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:27.228145  274539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:14:27.229214  274539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:14:27.230272  274539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:14:27.231916  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:27.232624  274539 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:14:27.261484  274539 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:14:27.261617  274539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:27.321108  274539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-09 14:14:27.311595109 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:27.321203  274539 docker.go:319] overlay module found
	I1109 14:14:27.322616  274539 out.go:179] * Using the docker driver based on existing profile
	I1109 14:14:27.323699  274539 start.go:309] selected driver: docker
	I1109 14:14:27.323718  274539 start.go:930] validating driver "docker" against &{Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:27.323819  274539 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:14:27.324328  274539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:27.383541  274539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-09 14:14:27.37426016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:27.383948  274539 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:14:27.383994  274539 cni.go:84] Creating CNI manager for ""
	I1109 14:14:27.384056  274539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:14:27.384102  274539 start.go:353] cluster config:
	{Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:27.385676  274539 out.go:179] * Starting "newest-cni-331530" primary control-plane node in "newest-cni-331530" cluster
	I1109 14:14:27.387354  274539 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:14:27.388465  274539 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:14:27.389520  274539 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:27.389549  274539 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:14:27.389567  274539 cache.go:65] Caching tarball of preloaded images
	I1109 14:14:27.389604  274539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:14:27.389678  274539 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:14:27.389697  274539 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:14:27.389810  274539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/config.json ...
	I1109 14:14:27.411564  274539 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:14:27.411584  274539 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:14:27.411603  274539 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:14:27.411628  274539 start.go:360] acquireMachinesLock for newest-cni-331530: {Name:mk7b6183552a57a627a0de774642a3a4314af43c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:14:27.411719  274539 start.go:364] duration metric: took 46.418µs to acquireMachinesLock for "newest-cni-331530"
	I1109 14:14:27.411741  274539 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:14:27.411750  274539 fix.go:54] fixHost starting: 
	I1109 14:14:27.411979  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:27.429672  274539 fix.go:112] recreateIfNeeded on newest-cni-331530: state=Stopped err=<nil>
	W1109 14:14:27.429710  274539 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:14:25.001006  268505 addons.go:515] duration metric: took 478.538847ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:14:25.306166  268505 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-593530" context rescaled to 1 replicas
	W1109 14:14:26.806326  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	W1109 14:14:28.806731  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	I1109 14:14:27.273728  256773 node_ready.go:49] node "default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:27.273755  256773 node_ready.go:38] duration metric: took 40.502705469s for node "default-k8s-diff-port-326524" to be "Ready" ...
	I1109 14:14:27.273773  256773 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:27.273823  256773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:27.288739  256773 api_server.go:72] duration metric: took 40.987914742s to wait for apiserver process to appear ...
	I1109 14:14:27.288767  256773 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:27.288790  256773 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:14:27.293996  256773 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:14:27.294990  256773 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:27.295010  256773 api_server.go:131] duration metric: took 6.236021ms to wait for apiserver health ...
	I1109 14:14:27.295018  256773 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:27.298258  256773 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:27.298294  256773 system_pods.go:61] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.298303  256773 system_pods.go:61] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.298310  256773 system_pods.go:61] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.298316  256773 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.298324  256773 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.298333  256773 system_pods.go:61] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.298339  256773 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.298346  256773 system_pods.go:61] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.298358  256773 system_pods.go:74] duration metric: took 3.333122ms to wait for pod list to return data ...
	I1109 14:14:27.298370  256773 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:27.300717  256773 default_sa.go:45] found service account: "default"
	I1109 14:14:27.300736  256773 default_sa.go:55] duration metric: took 2.360756ms for default service account to be created ...
	I1109 14:14:27.300745  256773 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:14:27.304529  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.304592  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.304601  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.304615  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.304624  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.304629  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.304634  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.304656  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.304665  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.304688  256773 retry.go:31] will retry after 236.393087ms: missing components: kube-dns
	I1109 14:14:27.545158  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.545217  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.545229  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.545238  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.545244  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.545278  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.545288  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.545294  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.545305  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.545324  256773 retry.go:31] will retry after 241.871609ms: missing components: kube-dns
	I1109 14:14:27.792009  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.792039  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.792046  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.792052  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.792055  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.792059  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.792062  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.792066  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.792071  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.792109  256773 retry.go:31] will retry after 430.689591ms: missing components: kube-dns
	I1109 14:14:28.226855  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:28.226889  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:28.226897  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:28.226906  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:28.226913  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:28.226926  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:28.226931  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:28.226936  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:28.226953  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:28.226976  256773 retry.go:31] will retry after 511.736387ms: missing components: kube-dns
	I1109 14:14:28.742716  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:28.742741  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Running
	I1109 14:14:28.742746  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:28.742759  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:28.742763  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:28.742767  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:28.742770  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:28.742773  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:28.742776  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Running
	I1109 14:14:28.742784  256773 system_pods.go:126] duration metric: took 1.442032955s to wait for k8s-apps to be running ...
	I1109 14:14:28.742793  256773 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:14:28.742832  256773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:28.755945  256773 system_svc.go:56] duration metric: took 13.142064ms WaitForService to wait for kubelet
	I1109 14:14:28.755970  256773 kubeadm.go:587] duration metric: took 42.455149759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:14:28.755990  256773 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:28.758414  256773 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:28.758439  256773 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:28.758458  256773 node_conditions.go:105] duration metric: took 2.459628ms to run NodePressure ...
	I1109 14:14:28.758473  256773 start.go:242] waiting for startup goroutines ...
	I1109 14:14:28.758487  256773 start.go:247] waiting for cluster config update ...
	I1109 14:14:28.758503  256773 start.go:256] writing updated cluster config ...
	I1109 14:14:28.758756  256773 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:28.762372  256773 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:28.765747  256773 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.769875  256773 pod_ready.go:94] pod "coredns-66bc5c9577-z8lkx" is "Ready"
	I1109 14:14:28.769897  256773 pod_ready.go:86] duration metric: took 4.124753ms for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.771797  256773 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.775231  256773 pod_ready.go:94] pod "etcd-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:28.775252  256773 pod_ready.go:86] duration metric: took 3.433428ms for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.777156  256773 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.780365  256773 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:28.780382  256773 pod_ready.go:86] duration metric: took 3.206399ms for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.782110  256773 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.166343  256773 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:29.166367  256773 pod_ready.go:86] duration metric: took 384.238658ms for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.366197  256773 pod_ready.go:83] waiting for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.766136  256773 pod_ready.go:94] pod "kube-proxy-n95wb" is "Ready"
	I1109 14:14:29.766157  256773 pod_ready.go:86] duration metric: took 399.937804ms for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.966186  256773 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:30.366783  256773 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:30.366806  256773 pod_ready.go:86] duration metric: took 400.591526ms for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:30.366817  256773 pod_ready.go:40] duration metric: took 1.604418075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:30.409181  256773 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:30.411042  256773 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-326524" cluster and "default" namespace by default
	W1109 14:14:28.006842  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	W1109 14:14:30.504933  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	I1109 14:14:27.431740  274539 out.go:252] * Restarting existing docker container for "newest-cni-331530" ...
	I1109 14:14:27.431804  274539 cli_runner.go:164] Run: docker start newest-cni-331530
	I1109 14:14:27.723911  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:27.742256  274539 kic.go:430] container "newest-cni-331530" state is running.
	I1109 14:14:27.742606  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:27.761864  274539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/config.json ...
	I1109 14:14:27.762097  274539 machine.go:94] provisionDockerMachine start ...
	I1109 14:14:27.762181  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:27.781171  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:27.781382  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:27.781392  274539 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:14:27.782070  274539 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51370->127.0.0.1:33100: read: connection reset by peer
	I1109 14:14:30.912783  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-331530
	
	I1109 14:14:30.912818  274539 ubuntu.go:182] provisioning hostname "newest-cni-331530"
	I1109 14:14:30.912874  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:30.931520  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:30.931801  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:30.931833  274539 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-331530 && echo "newest-cni-331530" | sudo tee /etc/hostname
	I1109 14:14:31.065432  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-331530
	
	I1109 14:14:31.065515  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.083548  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:31.083824  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:31.083853  274539 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-331530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-331530/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-331530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:14:31.208936  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:14:31.208962  274539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:14:31.208980  274539 ubuntu.go:190] setting up certificates
	I1109 14:14:31.208988  274539 provision.go:84] configureAuth start
	I1109 14:14:31.209030  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:31.228082  274539 provision.go:143] copyHostCerts
	I1109 14:14:31.228132  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:14:31.228148  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:14:31.228210  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:14:31.228302  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:14:31.228311  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:14:31.228339  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:14:31.228431  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:14:31.228447  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:14:31.228477  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:14:31.228542  274539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.newest-cni-331530 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-331530]
	I1109 14:14:31.677040  274539 provision.go:177] copyRemoteCerts
	I1109 14:14:31.677126  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:14:31.677177  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.695249  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:31.788325  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:14:31.805579  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:14:31.822478  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:14:31.839249  274539 provision.go:87] duration metric: took 630.251605ms to configureAuth
	I1109 14:14:31.839268  274539 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:14:31.839440  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:31.839545  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.858063  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:31.858358  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:31.858385  274539 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:14:32.123352  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:14:32.123375  274539 machine.go:97] duration metric: took 4.361263068s to provisionDockerMachine
	I1109 14:14:32.123388  274539 start.go:293] postStartSetup for "newest-cni-331530" (driver="docker")
	I1109 14:14:32.123400  274539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:14:32.123449  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:14:32.123487  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.141670  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.233243  274539 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:14:32.236496  274539 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:14:32.236518  274539 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:14:32.236527  274539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:14:32.236571  274539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:14:32.236651  274539 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:14:32.236742  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:14:32.243898  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:14:32.260427  274539 start.go:296] duration metric: took 137.029267ms for postStartSetup
	I1109 14:14:32.260497  274539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:14:32.260537  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.278838  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.367429  274539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:14:32.371740  274539 fix.go:56] duration metric: took 4.95998604s for fixHost
	I1109 14:14:32.371766  274539 start.go:83] releasing machines lock for "newest-cni-331530", held for 4.96003446s
	I1109 14:14:32.371820  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:32.390330  274539 ssh_runner.go:195] Run: cat /version.json
	I1109 14:14:32.390386  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.390407  274539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:14:32.390472  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.407704  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.408755  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.564613  274539 ssh_runner.go:195] Run: systemctl --version
	I1109 14:14:32.571012  274539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:14:32.606920  274539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:14:32.612027  274539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:14:32.612087  274539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:14:32.620330  274539 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:14:32.620351  274539 start.go:496] detecting cgroup driver to use...
	I1109 14:14:32.620392  274539 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:14:32.620432  274539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:14:32.633961  274539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:14:32.645344  274539 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:14:32.645400  274539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:14:32.658618  274539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:14:32.670206  274539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:14:32.749620  274539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:14:32.829965  274539 docker.go:234] disabling docker service ...
	I1109 14:14:32.830022  274539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:14:32.843428  274539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:14:32.854871  274539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:14:32.939565  274539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:14:33.019395  274539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:14:33.031615  274539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:14:33.045402  274539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:14:33.045488  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.053882  274539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:14:33.053932  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.062452  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.070568  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.078831  274539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:14:33.086248  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.094137  274539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.102098  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.110267  274539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:14:33.117230  274539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:14:33.123967  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:33.203147  274539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:14:33.313625  274539 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:14:33.313705  274539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:14:33.317538  274539 start.go:564] Will wait 60s for crictl version
	I1109 14:14:33.317594  274539 ssh_runner.go:195] Run: which crictl
	I1109 14:14:33.321065  274539 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:14:33.345284  274539 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:14:33.345340  274539 ssh_runner.go:195] Run: crio --version
	I1109 14:14:33.371633  274539 ssh_runner.go:195] Run: crio --version
	I1109 14:14:33.400250  274539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:14:33.401590  274539 cli_runner.go:164] Run: docker network inspect newest-cni-331530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:14:33.419331  274539 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:14:33.423271  274539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:14:33.434564  274539 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1109 14:14:32.504467  264151 pod_ready.go:94] pod "coredns-66bc5c9577-bbnm4" is "Ready"
	I1109 14:14:32.504496  264151 pod_ready.go:86] duration metric: took 34.504778764s for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.506955  264151 pod_ready.go:83] waiting for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.510570  264151 pod_ready.go:94] pod "etcd-embed-certs-273180" is "Ready"
	I1109 14:14:32.510590  264151 pod_ready.go:86] duration metric: took 3.614216ms for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.512402  264151 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.515898  264151 pod_ready.go:94] pod "kube-apiserver-embed-certs-273180" is "Ready"
	I1109 14:14:32.515921  264151 pod_ready.go:86] duration metric: took 3.495327ms for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.517532  264151 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.703898  264151 pod_ready.go:94] pod "kube-controller-manager-embed-certs-273180" is "Ready"
	I1109 14:14:32.703925  264151 pod_ready.go:86] duration metric: took 186.376206ms for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.902976  264151 pod_ready.go:83] waiting for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.303238  264151 pod_ready.go:94] pod "kube-proxy-k6zsl" is "Ready"
	I1109 14:14:33.303266  264151 pod_ready.go:86] duration metric: took 400.264059ms for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.503415  264151 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.903284  264151 pod_ready.go:94] pod "kube-scheduler-embed-certs-273180" is "Ready"
	I1109 14:14:33.903309  264151 pod_ready.go:86] duration metric: took 399.863623ms for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.903322  264151 pod_ready.go:40] duration metric: took 35.907389797s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:33.951503  264151 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:33.954226  264151 out.go:179] * Done! kubectl is now configured to use "embed-certs-273180" cluster and "default" namespace by default
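Note: the pod_ready lines above show the test polling each kube-system control-plane pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) for the Ready condition before declaring the embed-certs cluster started. A minimal client-go sketch of that kind of readiness poll follows; the helper name waitForPodsReady, the retry interval, and the timeout are illustrative assumptions, not minikube's own code.

// Hedged sketch: poll kube-system pods matching a label selector until all
// report the PodReady condition, roughly mirroring the pod_ready waits above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries a PodReady condition set to True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodsReady is an illustrative helper, not a minikube function.
func waitForPodsReady(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // retry interval chosen for the sketch; the log shows repeated checks
	}
	return fmt.Errorf("timed out waiting for pods with selector %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One of the selectors the log waits on: the CoreDNS pods.
	if err := waitForPodsReady(cs, "k8s-app=kube-dns", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods Ready")
}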
	W1109 14:14:31.306521  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	W1109 14:14:33.807024  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	I1109 14:14:33.435777  274539 kubeadm.go:884] updating cluster {Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:14:33.436335  274539 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:33.436448  274539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:14:33.468543  274539 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:14:33.468564  274539 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:14:33.468605  274539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:14:33.493302  274539 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:14:33.493321  274539 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:14:33.493331  274539 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:14:33.493431  274539 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-331530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:14:33.493502  274539 ssh_runner.go:195] Run: crio config
	I1109 14:14:33.539072  274539 cni.go:84] Creating CNI manager for ""
	I1109 14:14:33.539095  274539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:14:33.539111  274539 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1109 14:14:33.539142  274539 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-331530 NodeName:newest-cni-331530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:14:33.539279  274539 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-331530"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:14:33.539348  274539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:14:33.547453  274539 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:14:33.547513  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:14:33.554838  274539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:14:33.567355  274539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:14:33.579294  274539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1109 14:14:33.590819  274539 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:14:33.594197  274539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:14:33.603341  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:33.683905  274539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:14:33.721780  274539 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530 for IP: 192.168.76.2
	I1109 14:14:33.721801  274539 certs.go:195] generating shared ca certs ...
	I1109 14:14:33.721820  274539 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:33.721968  274539 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:14:33.722021  274539 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:14:33.722032  274539 certs.go:257] generating profile certs ...
	I1109 14:14:33.722135  274539 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.key
	I1109 14:14:33.722199  274539 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key.5fb0b4cb
	I1109 14:14:33.722252  274539 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key
	I1109 14:14:33.722385  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:14:33.722438  274539 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:14:33.722453  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:14:33.722488  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:14:33.722523  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:14:33.722555  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:14:33.722611  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:14:33.723238  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:14:33.742105  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:14:33.760423  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:14:33.780350  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:14:33.804462  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:14:33.822573  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:14:33.838810  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:14:33.856050  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:14:33.873441  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:14:33.889777  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:14:33.907245  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:14:33.927298  274539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:14:33.939424  274539 ssh_runner.go:195] Run: openssl version
	I1109 14:14:33.945995  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:14:33.954688  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.958715  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.958770  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.997357  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:14:34.005827  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:14:34.015472  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.019587  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.019636  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.057846  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:14:34.066728  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:14:34.074987  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.078819  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.078868  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.114498  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:14:34.122522  274539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:14:34.126262  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:14:34.160486  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:14:34.196162  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:14:34.241808  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:14:34.290982  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:14:34.342552  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:14:34.396435  274539 kubeadm.go:401] StartCluster: {Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:34.396545  274539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:14:34.396609  274539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:14:34.431828  274539 cri.go:89] found id: "d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642"
	I1109 14:14:34.431852  274539 cri.go:89] found id: "1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707"
	I1109 14:14:34.431858  274539 cri.go:89] found id: "b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6"
	I1109 14:14:34.431862  274539 cri.go:89] found id: "6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940"
	I1109 14:14:34.431867  274539 cri.go:89] found id: ""
	I1109 14:14:34.431909  274539 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:14:34.443922  274539 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:34Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:14:34.443987  274539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:14:34.451921  274539 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:14:34.451945  274539 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:14:34.451985  274539 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:14:34.459577  274539 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:14:34.460472  274539 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-331530" does not appear in /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:34.461194  274539 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-5854/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-331530" cluster setting kubeconfig missing "newest-cni-331530" context setting]
	I1109 14:14:34.462308  274539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.463959  274539 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:14:34.471814  274539 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:14:34.471839  274539 kubeadm.go:602] duration metric: took 19.88831ms to restartPrimaryControlPlane
	I1109 14:14:34.471851  274539 kubeadm.go:403] duration metric: took 75.424288ms to StartCluster
	I1109 14:14:34.471865  274539 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.471929  274539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:34.474250  274539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.474493  274539 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:14:34.474569  274539 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:14:34.474679  274539 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-331530"
	I1109 14:14:34.474698  274539 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-331530"
	W1109 14:14:34.474707  274539 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:14:34.474710  274539 addons.go:70] Setting dashboard=true in profile "newest-cni-331530"
	I1109 14:14:34.474733  274539 addons.go:239] Setting addon dashboard=true in "newest-cni-331530"
	I1109 14:14:34.474741  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	W1109 14:14:34.474742  274539 addons.go:248] addon dashboard should already be in state true
	I1109 14:14:34.474741  274539 addons.go:70] Setting default-storageclass=true in profile "newest-cni-331530"
	I1109 14:14:34.474766  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:34.474768  274539 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-331530"
	I1109 14:14:34.474772  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:34.475110  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.475311  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.475379  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.476931  274539 out.go:179] * Verifying Kubernetes components...
	I1109 14:14:34.478118  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:34.498669  274539 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:14:34.499967  274539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:14:34.501000  274539 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:14:34.501055  274539 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:14:34.501069  274539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:14:34.501108  274539 addons.go:239] Setting addon default-storageclass=true in "newest-cni-331530"
	I1109 14:14:34.501118  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	W1109 14:14:34.501127  274539 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:14:34.501153  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:34.501594  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.501929  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:14:34.501945  274539 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:14:34.501995  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:34.535820  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.537755  274539 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:14:34.537777  274539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:14:34.537828  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:34.541015  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.559704  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.613162  274539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:14:34.625580  274539 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:34.625672  274539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:34.636706  274539 api_server.go:72] duration metric: took 162.184344ms to wait for apiserver process to appear ...
	I1109 14:14:34.636730  274539 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:34.636748  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:34.645161  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:14:34.648499  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:14:34.648519  274539 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:14:34.661828  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:14:34.661852  274539 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:14:34.666411  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:14:34.675831  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:14:34.675849  274539 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:14:34.690000  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:14:34.690016  274539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:14:34.705252  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:14:34.705272  274539 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:14:34.719515  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:14:34.719540  274539 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:14:34.732790  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:14:34.732819  274539 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:14:34.745667  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:14:34.745693  274539 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:14:34.757973  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:14:34.757995  274539 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:14:34.770563  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:14:36.171483  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 14:14:36.171528  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 14:14:36.171551  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:36.194157  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 14:14:36.194201  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 14:14:36.637271  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:36.641625  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:14:36.641685  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:14:36.682804  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.037614109s)
	I1109 14:14:36.682849  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.016394225s)
	I1109 14:14:36.682940  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.912351833s)
	I1109 14:14:36.684411  274539 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-331530 addons enable metrics-server
	
	I1109 14:14:36.692971  274539 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:14:36.694141  274539 addons.go:515] duration metric: took 2.219578634s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:14:37.137025  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:37.141940  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:14:37.141967  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:14:37.637235  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:37.641207  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:14:37.642117  274539 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:37.642139  274539 api_server.go:131] duration metric: took 3.005402836s to wait for apiserver health ...
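Note: the healthz probes above follow the usual sequence for a restarting apiserver: anonymous requests are first rejected with 403, the endpoint then returns 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200 "ok". A minimal sketch of that kind of tolerant poll over HTTPS follows; the endpoint URL is taken from the log, while the function name, timeout, retry interval, and the decision to skip TLS verification are assumptions for illustration, not minikube's api_server.go implementation.

// Hedged sketch: poll an apiserver /healthz endpoint until it returns 200,
// tolerating the 403/500 responses seen above while post-start hooks complete.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz is an illustrative helper, not a minikube function.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver at 192.168.76.2:8443 serves a cluster-local cert,
			// so this bare probe skips verification (or it could trust the cluster CA).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 (anonymous user) and 500 (hooks still pending) both mean "keep waiting".
			fmt.Printf("healthz returned %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}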
	I1109 14:14:37.642147  274539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:37.645738  274539 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:37.645773  274539 system_pods.go:61] "coredns-66bc5c9577-xvlhm" [ab5d6559-9c58-477e-bae9-e4cedcc2832e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:14:37.645784  274539 system_pods.go:61] "etcd-newest-cni-331530" [3508f193-5b63-49b0-bbc3-f94d167d8b0c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:14:37.645797  274539 system_pods.go:61] "kindnet-rmtgg" [59572d13-2d29-4a86-bf1d-e75d0dd0d43c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:14:37.645806  274539 system_pods.go:61] "kube-apiserver-newest-cni-331530" [d47aa681-ce72-491a-847d-050b27ac3607] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:14:37.645818  274539 system_pods.go:61] "kube-controller-manager-newest-cni-331530" [ad70a3ff-3aba-485c-a18a-79b65fb30455] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:14:37.645827  274539 system_pods.go:61] "kube-proxy-fkl5q" [faf18639-aeb9-4b17-bb1d-32e85cf54dce] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:14:37.645837  274539 system_pods.go:61] "kube-scheduler-newest-cni-331530" [e5a8d839-20ee-4400-81dd-abcc742b5c2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:14:37.645846  274539 system_pods.go:61] "storage-provisioner" [77fd8da7-bb4b-4c95-beb6-7d28e7eaabbb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:14:37.645854  274539 system_pods.go:74] duration metric: took 3.700675ms to wait for pod list to return data ...
	I1109 14:14:37.645865  274539 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:37.647993  274539 default_sa.go:45] found service account: "default"
	I1109 14:14:37.648014  274539 default_sa.go:55] duration metric: took 2.143672ms for default service account to be created ...
	I1109 14:14:37.648025  274539 kubeadm.go:587] duration metric: took 3.173506607s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:14:37.648041  274539 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:37.650075  274539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:37.650093  274539 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:37.650104  274539 node_conditions.go:105] duration metric: took 2.058154ms to run NodePressure ...
	I1109 14:14:37.650116  274539 start.go:242] waiting for startup goroutines ...
	I1109 14:14:37.650129  274539 start.go:247] waiting for cluster config update ...
	I1109 14:14:37.650142  274539 start.go:256] writing updated cluster config ...
	I1109 14:14:37.650382  274539 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:37.702340  274539 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:37.703684  274539 out.go:179] * Done! kubectl is now configured to use "newest-cni-331530" cluster and "default" namespace by default
	I1109 14:14:35.806075  268505 node_ready.go:49] node "auto-593530" is "Ready"
	I1109 14:14:35.806106  268505 node_ready.go:38] duration metric: took 11.002864775s for node "auto-593530" to be "Ready" ...
	I1109 14:14:35.806123  268505 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:35.806179  268505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:35.818063  268505 api_server.go:72] duration metric: took 11.295640515s to wait for apiserver process to appear ...
	I1109 14:14:35.818095  268505 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:35.818111  268505 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1109 14:14:35.823855  268505 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1109 14:14:35.825169  268505 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:35.825199  268505 api_server.go:131] duration metric: took 7.098891ms to wait for apiserver health ...
	I1109 14:14:35.825210  268505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:35.828461  268505 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:35.828489  268505 system_pods.go:61] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:35.828497  268505 system_pods.go:61] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:35.828505  268505 system_pods.go:61] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:35.828510  268505 system_pods.go:61] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:35.828514  268505 system_pods.go:61] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:35.828520  268505 system_pods.go:61] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:35.828525  268505 system_pods.go:61] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:35.828532  268505 system_pods.go:61] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:35.828543  268505 system_pods.go:74] duration metric: took 3.326746ms to wait for pod list to return data ...
	I1109 14:14:35.828552  268505 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:35.833171  268505 default_sa.go:45] found service account: "default"
	I1109 14:14:35.833194  268505 default_sa.go:55] duration metric: took 4.63348ms for default service account to be created ...
	I1109 14:14:35.833203  268505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:14:35.836004  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:35.836037  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:35.836045  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:35.836067  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:35.836081  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:35.836087  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:35.836098  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:35.836103  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:35.836117  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:35.836151  268505 retry.go:31] will retry after 221.840857ms: missing components: kube-dns
	I1109 14:14:36.063432  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.063469  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.063477  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.063484  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.063490  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.063494  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.063499  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.063504  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.063511  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.063527  268505 retry.go:31] will retry after 287.97307ms: missing components: kube-dns
	I1109 14:14:36.355243  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.355288  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.355298  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.355305  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.355310  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.355316  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.355323  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.355328  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.355335  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.355355  268505 retry.go:31] will retry after 457.71668ms: missing components: kube-dns
	I1109 14:14:36.816945  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.816977  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.816983  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.816989  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.816992  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.816995  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.816999  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.817001  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.817006  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.817021  268505 retry.go:31] will retry after 422.194538ms: missing components: kube-dns
	I1109 14:14:37.244097  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:37.244128  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Running
	I1109 14:14:37.244135  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:37.244140  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:37.244145  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:37.244150  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:37.244156  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:37.244163  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:37.244168  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Running
	I1109 14:14:37.244179  268505 system_pods.go:126] duration metric: took 1.410969005s to wait for k8s-apps to be running ...
	I1109 14:14:37.244189  268505 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:14:37.244238  268505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:37.259804  268505 system_svc.go:56] duration metric: took 15.607114ms WaitForService to wait for kubelet
	I1109 14:14:37.259830  268505 kubeadm.go:587] duration metric: took 12.737412837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:14:37.259896  268505 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:37.262679  268505 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:37.262707  268505 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:37.262722  268505 node_conditions.go:105] duration metric: took 2.815329ms to run NodePressure ...
	I1109 14:14:37.262735  268505 start.go:242] waiting for startup goroutines ...
	I1109 14:14:37.262744  268505 start.go:247] waiting for cluster config update ...
	I1109 14:14:37.262753  268505 start.go:256] writing updated cluster config ...
	I1109 14:14:37.262965  268505 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:37.267115  268505 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:37.270893  268505 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4t8ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.274992  268505 pod_ready.go:94] pod "coredns-66bc5c9577-4t8ck" is "Ready"
	I1109 14:14:37.275012  268505 pod_ready.go:86] duration metric: took 4.094749ms for pod "coredns-66bc5c9577-4t8ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.276932  268505 pod_ready.go:83] waiting for pod "etcd-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.280633  268505 pod_ready.go:94] pod "etcd-auto-593530" is "Ready"
	I1109 14:14:37.280664  268505 pod_ready.go:86] duration metric: took 3.711388ms for pod "etcd-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.282434  268505 pod_ready.go:83] waiting for pod "kube-apiserver-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.286082  268505 pod_ready.go:94] pod "kube-apiserver-auto-593530" is "Ready"
	I1109 14:14:37.286101  268505 pod_ready.go:86] duration metric: took 3.647964ms for pod "kube-apiserver-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.287867  268505 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.671541  268505 pod_ready.go:94] pod "kube-controller-manager-auto-593530" is "Ready"
	I1109 14:14:37.671564  268505 pod_ready.go:86] duration metric: took 383.676466ms for pod "kube-controller-manager-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.871247  268505 pod_ready.go:83] waiting for pod "kube-proxy-4mbmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.271957  268505 pod_ready.go:94] pod "kube-proxy-4mbmw" is "Ready"
	I1109 14:14:38.271985  268505 pod_ready.go:86] duration metric: took 400.714602ms for pod "kube-proxy-4mbmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.472184  268505 pod_ready.go:83] waiting for pod "kube-scheduler-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.872048  268505 pod_ready.go:94] pod "kube-scheduler-auto-593530" is "Ready"
	I1109 14:14:38.872075  268505 pod_ready.go:86] duration metric: took 399.867482ms for pod "kube-scheduler-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.872090  268505 pod_ready.go:40] duration metric: took 1.604942921s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:38.918703  268505 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:38.920127  268505 out.go:179] * Done! kubectl is now configured to use "auto-593530" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.09118484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.093749788Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d6f08b4a-edb6-4e68-afa7-b69bc0898883 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.094688282Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=95d21a97-eb78-4043-a461-9cc6ede5bd90 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.096845038Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.0972848Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.09742151Z" level=info msg="Ran pod sandbox cc0cd24b536ebf3dcf848522b3503a8e3e0f15df90a8c71e66c3f7899b2a8782 with infra container: kube-system/kindnet-rmtgg/POD" id=d6f08b4a-edb6-4e68-afa7-b69bc0898883 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.098067623Z" level=info msg="Ran pod sandbox 55654b58185b3cf601321e539b1a36261545aaaaa5b010c436d6bb1ea0c890fc with infra container: kube-system/kube-proxy-fkl5q/POD" id=95d21a97-eb78-4043-a461-9cc6ede5bd90 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.098435593Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=393a697e-ed8c-4d44-a11b-ae4a5eb0194c name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.098938445Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d620f04b-6f74-4e80-b646-fe3e7da1f828 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.099337207Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e74ef883-ec38-4e86-bc0d-21a14c372044 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.099707244Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5513d993-75cb-49b7-8bfe-5a1239407ae1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.100363648Z" level=info msg="Creating container: kube-system/kindnet-rmtgg/kindnet-cni" id=8e206d2b-f9f1-4295-928f-235b15127906 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.100450554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.100530731Z" level=info msg="Creating container: kube-system/kube-proxy-fkl5q/kube-proxy" id=6ea10be9-1e85-48c4-ad08-a1c4cd105479 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.100668598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.105539008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.106022654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.107965109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.10849806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.132499758Z" level=info msg="Created container 257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587: kube-system/kindnet-rmtgg/kindnet-cni" id=8e206d2b-f9f1-4295-928f-235b15127906 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.132965581Z" level=info msg="Starting container: 257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587" id=e0f21d58-7371-4947-8889-609c61a4ab01 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.134809997Z" level=info msg="Started container" PID=1043 containerID=257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587 description=kube-system/kindnet-rmtgg/kindnet-cni id=e0f21d58-7371-4947-8889-609c61a4ab01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc0cd24b536ebf3dcf848522b3503a8e3e0f15df90a8c71e66c3f7899b2a8782
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.136225544Z" level=info msg="Created container c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753: kube-system/kube-proxy-fkl5q/kube-proxy" id=6ea10be9-1e85-48c4-ad08-a1c4cd105479 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.136736708Z" level=info msg="Starting container: c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753" id=68c11189-0d06-4967-a23e-5e07ddc9b332 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.13970551Z" level=info msg="Started container" PID=1044 containerID=c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753 description=kube-system/kube-proxy-fkl5q/kube-proxy id=68c11189-0d06-4967-a23e-5e07ddc9b332 name=/runtime.v1.RuntimeService/StartContainer sandboxID=55654b58185b3cf601321e539b1a36261545aaaaa5b010c436d6bb1ea0c890fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c1bcd1c1af2c6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   55654b58185b3       kube-proxy-fkl5q                            kube-system
	257f5a6c7120e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   cc0cd24b536eb       kindnet-rmtgg                               kube-system
	d22c955680998       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   af0bb622a1107       kube-controller-manager-newest-cni-331530   kube-system
	1526f0e319d49       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   0263c0975135f       kube-apiserver-newest-cni-331530            kube-system
	b19747f4b9829       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   d191777186e49       kube-scheduler-newest-cni-331530            kube-system
	6f8a1e6423bae       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   1bb238ada2919       etcd-newest-cni-331530                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-331530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-331530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=newest-cni-331530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_14_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:14:03 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-331530
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:14:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:14:36 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:14:36 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:14:36 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 09 Nov 2025 14:14:36 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-331530
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                7aba5339-7922-4c58-b653-e5c31d75079c
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-331530                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-rmtgg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-331530             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-331530    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-fkl5q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-331530             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node newest-cni-331530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-331530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node newest-cni-331530 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-331530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  36s                kubelet          Node newest-cni-331530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     36s                kubelet          Node newest-cni-331530 status is now: NodeHasSufficientPID
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           31s                node-controller  Node newest-cni-331530 event: Registered Node newest-cni-331530 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node newest-cni-331530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node newest-cni-331530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x8 over 8s)    kubelet          Node newest-cni-331530 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-331530 event: Registered Node newest-cni-331530 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940] <==
	{"level":"warn","ts":"2025-11-09T14:14:35.541820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.556526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.562815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.568826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.576030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.583512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.590184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.597848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.606077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.618849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.625315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.631433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.637595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.643678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.649427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.655304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.661456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.667801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.674197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.680425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.687142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.703307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.709830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.715632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.766036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38104","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:14:42 up 57 min,  0 user,  load average: 5.35, 3.70, 2.23
	Linux newest-cni-331530 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587] <==
	I1109 14:14:37.396112       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:14:37.396325       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:14:37.396416       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:14:37.396430       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:14:37.396442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:14:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:14:37.596124       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:14:37.596187       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:14:37.596200       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:14:37.596439       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:14:37.896257       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:14:37.896280       1 metrics.go:72] Registering metrics
	I1109 14:14:37.896341       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707] <==
	I1109 14:14:36.251226       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:14:36.251466       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:14:36.252656       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:14:36.252757       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1109 14:14:36.252797       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:14:36.252811       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:14:36.252817       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:14:36.252823       1 cache.go:39] Caches are synced for autoregister controller
	E1109 14:14:36.259328       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:14:36.259822       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:14:36.261304       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:14:36.261323       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:14:36.286418       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:14:36.297790       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:14:36.497931       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:14:36.522430       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:14:36.537415       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:14:36.543819       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:14:36.549333       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:14:36.577404       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.14.57"}
	I1109 14:14:36.585667       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.31.141"}
	I1109 14:14:37.153268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:14:39.808810       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:14:40.010501       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:14:40.059774       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642] <==
	I1109 14:14:39.604253       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:14:39.604266       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:14:39.604345       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:14:39.604465       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:14:39.604778       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:14:39.604804       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 14:14:39.605565       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:14:39.605735       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:14:39.608124       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:14:39.609312       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:14:39.610483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:14:39.610497       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:14:39.610506       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:14:39.610696       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:14:39.611584       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:14:39.612872       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:14:39.614376       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:14:39.621590       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 14:14:39.624840       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:14:39.624974       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:14:39.625062       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-331530"
	I1109 14:14:39.625111       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1109 14:14:39.627268       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:14:39.628156       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:14:39.631369       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753] <==
	I1109 14:14:37.174846       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:14:37.234471       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:14:37.334586       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:14:37.334619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:14:37.334771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:14:37.354134       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:14:37.354175       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:14:37.358832       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:14:37.359146       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:14:37.359165       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:14:37.360545       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:14:37.360571       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:14:37.360672       1 config.go:200] "Starting service config controller"
	I1109 14:14:37.360699       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:14:37.360722       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:14:37.360727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:14:37.360762       1 config.go:309] "Starting node config controller"
	I1109 14:14:37.360774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:14:37.360782       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:14:37.460792       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:14:37.460792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:14:37.460822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6] <==
	I1109 14:14:35.015985       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:14:36.197884       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:14:36.197931       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:14:36.197945       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:14:36.197963       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:14:36.214265       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:14:36.214287       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:14:36.217139       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:14:36.217165       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:14:36.217179       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:14:36.217236       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:14:36.317618       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.186508     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.290354     670 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.290500     670 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.290539     670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.291577     670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.295180     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-331530\" already exists" pod="kube-system/kube-scheduler-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.295210     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.301166     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-331530\" already exists" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.301206     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.307178     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-331530\" already exists" pod="kube-system/kube-apiserver-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.307209     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.311987     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-331530\" already exists" pod="kube-system/kube-controller-manager-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.783807     670 apiserver.go:52] "Watching apiserver"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.823361     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.828043     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-331530\" already exists" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.886567     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893616     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/faf18639-aeb9-4b17-bb1d-32e85cf54dce-xtables-lock\") pod \"kube-proxy-fkl5q\" (UID: \"faf18639-aeb9-4b17-bb1d-32e85cf54dce\") " pod="kube-system/kube-proxy-fkl5q"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893739     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-cni-cfg\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893770     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-xtables-lock\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893819     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faf18639-aeb9-4b17-bb1d-32e85cf54dce-lib-modules\") pod \"kube-proxy-fkl5q\" (UID: \"faf18639-aeb9-4b17-bb1d-32e85cf54dce\") " pod="kube-system/kube-proxy-fkl5q"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893844     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-lib-modules\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:38 newest-cni-331530 kubelet[670]: I1109 14:14:38.654491     670 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 09 14:14:38 newest-cni-331530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:14:38 newest-cni-331530 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:14:38 newest-cni-331530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-331530 -n newest-cni-331530
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-331530 -n newest-cni-331530: exit status 2 (321.559178ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-331530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xvlhm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-4hrdd kubernetes-dashboard-855c9754f9-2q22q
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-4hrdd kubernetes-dashboard-855c9754f9-2q22q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-4hrdd kubernetes-dashboard-855c9754f9-2q22q: exit status 1 (66.527541ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xvlhm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-4hrdd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2q22q" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-4hrdd kubernetes-dashboard-855c9754f9-2q22q: exit status 1
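Note on the two kubectl calls above: the pod listing uses -A (all namespaces) plus a field selector, while the follow-up describe is run without -n, so it only searches the default namespace; the NotFound errors are consistent with those pods living in kube-system and kubernetes-dashboard. For reference, the same "non-running pods" query can be reproduced programmatically with client-go. This is a minimal sketch, not part of the test harness, assuming a standard kubeconfig at its default location; variable names are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (assumption: the context to inspect is the current one).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pods support the status.phase field selector; "" lists across all namespaces,
	// matching: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
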
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-331530
helpers_test.go:243: (dbg) docker inspect newest-cni-331530:

-- stdout --
	[
	    {
	        "Id": "b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa",
	        "Created": "2025-11-09T14:13:46.9311742Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:14:27.457777089Z",
	            "FinishedAt": "2025-11-09T14:14:26.538788189Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/hosts",
	        "LogPath": "/var/lib/docker/containers/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa/b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa-json.log",
	        "Name": "/newest-cni-331530",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-331530:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-331530",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0c3dbe7b9b7d1392a5e0ce09f7cbc07a68f2936e9e3b6568eef7471330a9dfa",
	                "LowerDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/merged",
	                "UpperDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/diff",
	                "WorkDir": "/var/lib/docker/overlay2/270c7912b32411650ef3bf9dcee1cc3fe1f7e282c01edb904b3e13882047bd17/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-331530",
	                "Source": "/var/lib/docker/volumes/newest-cni-331530/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-331530",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-331530",
	                "name.minikube.sigs.k8s.io": "newest-cni-331530",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e7537594de818bc57f9aaf12d5c94b2d2df242669b2f7c8c1f28c07a9c1c340",
	            "SandboxKey": "/var/run/docker/netns/5e7537594de8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-331530": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:fa:4a:50:07:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "48111e278cbe43aa4a69b8079dbb61289459a16d778ee4d9d738546cd26897c8",
	                    "EndpointID": "4438a4d5c91cb0524395802703be6c94ad8e0e7cca8dcd7fcec700410aa59570",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-331530",
	                        "b0c3dbe7b9b7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
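The inspect output above is what the harness relies on to reach the node: each container port (22, 2376, 5000, 8443, 32443) is published to an ephemeral 127.0.0.1 host port, and the libmachine SSH dial later in these logs uses the 22/tcp mapping (127.0.0.1:33100). Below is a minimal sketch, not part of the harness, of reading that mapping with the Docker Engine Go SDK, assuming a local Docker daemon reachable via the standard environment variables:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Same data as `docker inspect newest-cni-331530`, but as structured Go types.
	insp, err := cli.ContainerInspect(context.Background(), "newest-cni-331530")
	if err != nil {
		panic(err)
	}
	// NetworkSettings.Ports maps "port/proto" to its host bindings.
	for _, b := range insp.NetworkSettings.Ports[nat.Port("22/tcp")] {
		fmt.Printf("ssh mapped to %s:%s\n", b.HostIP, b.HostPort)
	}
}
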
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331530 -n newest-cni-331530
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331530 -n newest-cni-331530: exit status 2 (305.624363ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-331530 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ stop    │ -p embed-certs-273180 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-169816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ no-preload-152932 image list --format=json                                                                                                                                                                                                    │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p no-preload-152932 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p auto-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ stop    │ -p newest-cni-331530 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-331530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ newest-cni-331530 image list --format=json                                                                                                                                                                                                    │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ pause   │ -p newest-cni-331530 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-326524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ ssh     │ -p auto-593530 pgrep -a kubelet                                                                                                                                                                                                               │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ stop    │ -p default-k8s-diff-port-326524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:14:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:14:27.213300  274539 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:14:27.213403  274539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:27.213415  274539 out.go:374] Setting ErrFile to fd 2...
	I1109 14:14:27.213421  274539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:27.213711  274539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:14:27.214223  274539 out.go:368] Setting JSON to false
	I1109 14:14:27.215819  274539 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3417,"bootTime":1762694250,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:14:27.215915  274539 start.go:143] virtualization: kvm guest
	I1109 14:14:27.221200  274539 out.go:179] * [newest-cni-331530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:14:27.222608  274539 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:14:27.222670  274539 notify.go:221] Checking for updates...
	I1109 14:14:27.225558  274539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:14:27.226971  274539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:27.228145  274539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:14:27.229214  274539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:14:27.230272  274539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:14:27.231916  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:27.232624  274539 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:14:27.261484  274539 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:14:27.261617  274539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:27.321108  274539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-09 14:14:27.311595109 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:27.321203  274539 docker.go:319] overlay module found
	I1109 14:14:27.322616  274539 out.go:179] * Using the docker driver based on existing profile
	I1109 14:14:27.323699  274539 start.go:309] selected driver: docker
	I1109 14:14:27.323718  274539 start.go:930] validating driver "docker" against &{Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:27.323819  274539 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:14:27.324328  274539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:27.383541  274539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-09 14:14:27.37426016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:27.383948  274539 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:14:27.383994  274539 cni.go:84] Creating CNI manager for ""
	I1109 14:14:27.384056  274539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:14:27.384102  274539 start.go:353] cluster config:
	{Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:27.385676  274539 out.go:179] * Starting "newest-cni-331530" primary control-plane node in "newest-cni-331530" cluster
	I1109 14:14:27.387354  274539 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:14:27.388465  274539 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:14:27.389520  274539 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:27.389549  274539 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:14:27.389567  274539 cache.go:65] Caching tarball of preloaded images
	I1109 14:14:27.389604  274539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:14:27.389678  274539 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:14:27.389697  274539 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:14:27.389810  274539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/config.json ...
	I1109 14:14:27.411564  274539 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:14:27.411584  274539 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:14:27.411603  274539 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:14:27.411628  274539 start.go:360] acquireMachinesLock for newest-cni-331530: {Name:mk7b6183552a57a627a0de774642a3a4314af43c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:14:27.411719  274539 start.go:364] duration metric: took 46.418µs to acquireMachinesLock for "newest-cni-331530"
	I1109 14:14:27.411741  274539 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:14:27.411750  274539 fix.go:54] fixHost starting: 
	I1109 14:14:27.411979  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:27.429672  274539 fix.go:112] recreateIfNeeded on newest-cni-331530: state=Stopped err=<nil>
	W1109 14:14:27.429710  274539 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:14:25.001006  268505 addons.go:515] duration metric: took 478.538847ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:14:25.306166  268505 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-593530" context rescaled to 1 replicas
	W1109 14:14:26.806326  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	W1109 14:14:28.806731  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	I1109 14:14:27.273728  256773 node_ready.go:49] node "default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:27.273755  256773 node_ready.go:38] duration metric: took 40.502705469s for node "default-k8s-diff-port-326524" to be "Ready" ...
	I1109 14:14:27.273773  256773 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:27.273823  256773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:27.288739  256773 api_server.go:72] duration metric: took 40.987914742s to wait for apiserver process to appear ...
	I1109 14:14:27.288767  256773 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:27.288790  256773 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:14:27.293996  256773 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:14:27.294990  256773 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:27.295010  256773 api_server.go:131] duration metric: took 6.236021ms to wait for apiserver health ...
	I1109 14:14:27.295018  256773 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:27.298258  256773 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:27.298294  256773 system_pods.go:61] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.298303  256773 system_pods.go:61] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.298310  256773 system_pods.go:61] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.298316  256773 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.298324  256773 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.298333  256773 system_pods.go:61] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.298339  256773 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.298346  256773 system_pods.go:61] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.298358  256773 system_pods.go:74] duration metric: took 3.333122ms to wait for pod list to return data ...
	I1109 14:14:27.298370  256773 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:27.300717  256773 default_sa.go:45] found service account: "default"
	I1109 14:14:27.300736  256773 default_sa.go:55] duration metric: took 2.360756ms for default service account to be created ...
	I1109 14:14:27.300745  256773 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:14:27.304529  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.304592  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.304601  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.304615  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.304624  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.304629  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.304634  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.304656  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.304665  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.304688  256773 retry.go:31] will retry after 236.393087ms: missing components: kube-dns
	I1109 14:14:27.545158  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.545217  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.545229  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.545238  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.545244  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.545278  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.545288  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.545294  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.545305  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.545324  256773 retry.go:31] will retry after 241.871609ms: missing components: kube-dns
	I1109 14:14:27.792009  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.792039  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.792046  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.792052  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.792055  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.792059  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.792062  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.792066  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.792071  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.792109  256773 retry.go:31] will retry after 430.689591ms: missing components: kube-dns
	I1109 14:14:28.226855  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:28.226889  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:28.226897  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:28.226906  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:28.226913  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:28.226926  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:28.226931  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:28.226936  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:28.226953  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:28.226976  256773 retry.go:31] will retry after 511.736387ms: missing components: kube-dns
	I1109 14:14:28.742716  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:28.742741  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Running
	I1109 14:14:28.742746  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:28.742759  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:28.742763  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:28.742767  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:28.742770  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:28.742773  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:28.742776  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Running
	I1109 14:14:28.742784  256773 system_pods.go:126] duration metric: took 1.442032955s to wait for k8s-apps to be running ...
	I1109 14:14:28.742793  256773 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:14:28.742832  256773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:28.755945  256773 system_svc.go:56] duration metric: took 13.142064ms WaitForService to wait for kubelet
	I1109 14:14:28.755970  256773 kubeadm.go:587] duration metric: took 42.455149759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:14:28.755990  256773 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:28.758414  256773 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:28.758439  256773 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:28.758458  256773 node_conditions.go:105] duration metric: took 2.459628ms to run NodePressure ...
	I1109 14:14:28.758473  256773 start.go:242] waiting for startup goroutines ...
	I1109 14:14:28.758487  256773 start.go:247] waiting for cluster config update ...
	I1109 14:14:28.758503  256773 start.go:256] writing updated cluster config ...
	I1109 14:14:28.758756  256773 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:28.762372  256773 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:28.765747  256773 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.769875  256773 pod_ready.go:94] pod "coredns-66bc5c9577-z8lkx" is "Ready"
	I1109 14:14:28.769897  256773 pod_ready.go:86] duration metric: took 4.124753ms for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.771797  256773 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.775231  256773 pod_ready.go:94] pod "etcd-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:28.775252  256773 pod_ready.go:86] duration metric: took 3.433428ms for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.777156  256773 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.780365  256773 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:28.780382  256773 pod_ready.go:86] duration metric: took 3.206399ms for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.782110  256773 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.166343  256773 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:29.166367  256773 pod_ready.go:86] duration metric: took 384.238658ms for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.366197  256773 pod_ready.go:83] waiting for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.766136  256773 pod_ready.go:94] pod "kube-proxy-n95wb" is "Ready"
	I1109 14:14:29.766157  256773 pod_ready.go:86] duration metric: took 399.937804ms for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.966186  256773 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:30.366783  256773 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:30.366806  256773 pod_ready.go:86] duration metric: took 400.591526ms for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:30.366817  256773 pod_ready.go:40] duration metric: took 1.604418075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:30.409181  256773 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:30.411042  256773 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-326524" cluster and "default" namespace by default
	W1109 14:14:28.006842  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	W1109 14:14:30.504933  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	I1109 14:14:27.431740  274539 out.go:252] * Restarting existing docker container for "newest-cni-331530" ...
	I1109 14:14:27.431804  274539 cli_runner.go:164] Run: docker start newest-cni-331530
	I1109 14:14:27.723911  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:27.742256  274539 kic.go:430] container "newest-cni-331530" state is running.
	I1109 14:14:27.742606  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:27.761864  274539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/config.json ...
	I1109 14:14:27.762097  274539 machine.go:94] provisionDockerMachine start ...
	I1109 14:14:27.762181  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:27.781171  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:27.781382  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:27.781392  274539 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:14:27.782070  274539 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51370->127.0.0.1:33100: read: connection reset by peer
	I1109 14:14:30.912783  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-331530
	
	I1109 14:14:30.912818  274539 ubuntu.go:182] provisioning hostname "newest-cni-331530"
	I1109 14:14:30.912874  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:30.931520  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:30.931801  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:30.931833  274539 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-331530 && echo "newest-cni-331530" | sudo tee /etc/hostname
	I1109 14:14:31.065432  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-331530
	
	I1109 14:14:31.065515  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.083548  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:31.083824  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:31.083853  274539 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-331530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-331530/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-331530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:14:31.208936  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:14:31.208962  274539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:14:31.208980  274539 ubuntu.go:190] setting up certificates
	I1109 14:14:31.208988  274539 provision.go:84] configureAuth start
	I1109 14:14:31.209030  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:31.228082  274539 provision.go:143] copyHostCerts
	I1109 14:14:31.228132  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:14:31.228148  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:14:31.228210  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:14:31.228302  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:14:31.228311  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:14:31.228339  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:14:31.228431  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:14:31.228447  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:14:31.228477  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:14:31.228542  274539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.newest-cni-331530 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-331530]
	I1109 14:14:31.677040  274539 provision.go:177] copyRemoteCerts
	I1109 14:14:31.677126  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:14:31.677177  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.695249  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:31.788325  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:14:31.805579  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:14:31.822478  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:14:31.839249  274539 provision.go:87] duration metric: took 630.251605ms to configureAuth
	I1109 14:14:31.839268  274539 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:14:31.839440  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:31.839545  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.858063  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:31.858358  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:31.858385  274539 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:14:32.123352  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:14:32.123375  274539 machine.go:97] duration metric: took 4.361263068s to provisionDockerMachine
	I1109 14:14:32.123388  274539 start.go:293] postStartSetup for "newest-cni-331530" (driver="docker")
	I1109 14:14:32.123400  274539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:14:32.123449  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:14:32.123487  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.141670  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.233243  274539 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:14:32.236496  274539 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:14:32.236518  274539 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:14:32.236527  274539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:14:32.236571  274539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:14:32.236651  274539 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:14:32.236742  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:14:32.243898  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:14:32.260427  274539 start.go:296] duration metric: took 137.029267ms for postStartSetup
	I1109 14:14:32.260497  274539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:14:32.260537  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.278838  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.367429  274539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:14:32.371740  274539 fix.go:56] duration metric: took 4.95998604s for fixHost
	I1109 14:14:32.371766  274539 start.go:83] releasing machines lock for "newest-cni-331530", held for 4.96003446s
	I1109 14:14:32.371820  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:32.390330  274539 ssh_runner.go:195] Run: cat /version.json
	I1109 14:14:32.390386  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.390407  274539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:14:32.390472  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.407704  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.408755  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.564613  274539 ssh_runner.go:195] Run: systemctl --version
	I1109 14:14:32.571012  274539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:14:32.606920  274539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:14:32.612027  274539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:14:32.612087  274539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:14:32.620330  274539 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:14:32.620351  274539 start.go:496] detecting cgroup driver to use...
	I1109 14:14:32.620392  274539 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:14:32.620432  274539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:14:32.633961  274539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:14:32.645344  274539 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:14:32.645400  274539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:14:32.658618  274539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:14:32.670206  274539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:14:32.749620  274539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:14:32.829965  274539 docker.go:234] disabling docker service ...
	I1109 14:14:32.830022  274539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:14:32.843428  274539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:14:32.854871  274539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:14:32.939565  274539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:14:33.019395  274539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:14:33.031615  274539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:14:33.045402  274539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:14:33.045488  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.053882  274539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:14:33.053932  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.062452  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.070568  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.078831  274539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:14:33.086248  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.094137  274539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.102098  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.110267  274539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:14:33.117230  274539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:14:33.123967  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:33.203147  274539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:14:33.313625  274539 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:14:33.313705  274539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:14:33.317538  274539 start.go:564] Will wait 60s for crictl version
	I1109 14:14:33.317594  274539 ssh_runner.go:195] Run: which crictl
	I1109 14:14:33.321065  274539 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:14:33.345284  274539 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:14:33.345340  274539 ssh_runner.go:195] Run: crio --version
	I1109 14:14:33.371633  274539 ssh_runner.go:195] Run: crio --version
	I1109 14:14:33.400250  274539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:14:33.401590  274539 cli_runner.go:164] Run: docker network inspect newest-cni-331530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:14:33.419331  274539 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:14:33.423271  274539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:14:33.434564  274539 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1109 14:14:32.504467  264151 pod_ready.go:94] pod "coredns-66bc5c9577-bbnm4" is "Ready"
	I1109 14:14:32.504496  264151 pod_ready.go:86] duration metric: took 34.504778764s for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.506955  264151 pod_ready.go:83] waiting for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.510570  264151 pod_ready.go:94] pod "etcd-embed-certs-273180" is "Ready"
	I1109 14:14:32.510590  264151 pod_ready.go:86] duration metric: took 3.614216ms for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.512402  264151 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.515898  264151 pod_ready.go:94] pod "kube-apiserver-embed-certs-273180" is "Ready"
	I1109 14:14:32.515921  264151 pod_ready.go:86] duration metric: took 3.495327ms for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.517532  264151 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.703898  264151 pod_ready.go:94] pod "kube-controller-manager-embed-certs-273180" is "Ready"
	I1109 14:14:32.703925  264151 pod_ready.go:86] duration metric: took 186.376206ms for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.902976  264151 pod_ready.go:83] waiting for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.303238  264151 pod_ready.go:94] pod "kube-proxy-k6zsl" is "Ready"
	I1109 14:14:33.303266  264151 pod_ready.go:86] duration metric: took 400.264059ms for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.503415  264151 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.903284  264151 pod_ready.go:94] pod "kube-scheduler-embed-certs-273180" is "Ready"
	I1109 14:14:33.903309  264151 pod_ready.go:86] duration metric: took 399.863623ms for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.903322  264151 pod_ready.go:40] duration metric: took 35.907389797s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:33.951503  264151 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:33.954226  264151 out.go:179] * Done! kubectl is now configured to use "embed-certs-273180" cluster and "default" namespace by default
	W1109 14:14:31.306521  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	W1109 14:14:33.807024  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	I1109 14:14:33.435777  274539 kubeadm.go:884] updating cluster {Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:14:33.436335  274539 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:33.436448  274539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:14:33.468543  274539 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:14:33.468564  274539 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:14:33.468605  274539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:14:33.493302  274539 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:14:33.493321  274539 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:14:33.493331  274539 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:14:33.493431  274539 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-331530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:14:33.493502  274539 ssh_runner.go:195] Run: crio config
	I1109 14:14:33.539072  274539 cni.go:84] Creating CNI manager for ""
	I1109 14:14:33.539095  274539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:14:33.539111  274539 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1109 14:14:33.539142  274539 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-331530 NodeName:newest-cni-331530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:14:33.539279  274539 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-331530"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:14:33.539348  274539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:14:33.547453  274539 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:14:33.547513  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:14:33.554838  274539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:14:33.567355  274539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:14:33.579294  274539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1109 14:14:33.590819  274539 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:14:33.594197  274539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:14:33.603341  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:33.683905  274539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:14:33.721780  274539 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530 for IP: 192.168.76.2
	I1109 14:14:33.721801  274539 certs.go:195] generating shared ca certs ...
	I1109 14:14:33.721820  274539 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:33.721968  274539 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:14:33.722021  274539 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:14:33.722032  274539 certs.go:257] generating profile certs ...
	I1109 14:14:33.722135  274539 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.key
	I1109 14:14:33.722199  274539 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key.5fb0b4cb
	I1109 14:14:33.722252  274539 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key
	I1109 14:14:33.722385  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:14:33.722438  274539 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:14:33.722453  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:14:33.722488  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:14:33.722523  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:14:33.722555  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:14:33.722611  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:14:33.723238  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:14:33.742105  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:14:33.760423  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:14:33.780350  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:14:33.804462  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:14:33.822573  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:14:33.838810  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:14:33.856050  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:14:33.873441  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:14:33.889777  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:14:33.907245  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:14:33.927298  274539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:14:33.939424  274539 ssh_runner.go:195] Run: openssl version
	I1109 14:14:33.945995  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:14:33.954688  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.958715  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.958770  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.997357  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:14:34.005827  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:14:34.015472  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.019587  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.019636  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.057846  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:14:34.066728  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:14:34.074987  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.078819  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.078868  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.114498  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:14:34.122522  274539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:14:34.126262  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:14:34.160486  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:14:34.196162  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:14:34.241808  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:14:34.290982  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:14:34.342552  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:14:34.396435  274539 kubeadm.go:401] StartCluster: {Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:34.396545  274539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:14:34.396609  274539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:14:34.431828  274539 cri.go:89] found id: "d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642"
	I1109 14:14:34.431852  274539 cri.go:89] found id: "1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707"
	I1109 14:14:34.431858  274539 cri.go:89] found id: "b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6"
	I1109 14:14:34.431862  274539 cri.go:89] found id: "6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940"
	I1109 14:14:34.431867  274539 cri.go:89] found id: ""
	I1109 14:14:34.431909  274539 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:14:34.443922  274539 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:34Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:14:34.443987  274539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:14:34.451921  274539 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:14:34.451945  274539 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:14:34.451985  274539 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:14:34.459577  274539 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:14:34.460472  274539 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-331530" does not appear in /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:34.461194  274539 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-5854/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-331530" cluster setting kubeconfig missing "newest-cni-331530" context setting]
	I1109 14:14:34.462308  274539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.463959  274539 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:14:34.471814  274539 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:14:34.471839  274539 kubeadm.go:602] duration metric: took 19.88831ms to restartPrimaryControlPlane
	I1109 14:14:34.471851  274539 kubeadm.go:403] duration metric: took 75.424288ms to StartCluster
	I1109 14:14:34.471865  274539 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.471929  274539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:34.474250  274539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.474493  274539 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:14:34.474569  274539 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:14:34.474679  274539 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-331530"
	I1109 14:14:34.474698  274539 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-331530"
	W1109 14:14:34.474707  274539 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:14:34.474710  274539 addons.go:70] Setting dashboard=true in profile "newest-cni-331530"
	I1109 14:14:34.474733  274539 addons.go:239] Setting addon dashboard=true in "newest-cni-331530"
	I1109 14:14:34.474741  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	W1109 14:14:34.474742  274539 addons.go:248] addon dashboard should already be in state true
	I1109 14:14:34.474741  274539 addons.go:70] Setting default-storageclass=true in profile "newest-cni-331530"
	I1109 14:14:34.474766  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:34.474768  274539 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-331530"
	I1109 14:14:34.474772  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:34.475110  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.475311  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.475379  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.476931  274539 out.go:179] * Verifying Kubernetes components...
	I1109 14:14:34.478118  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:34.498669  274539 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:14:34.499967  274539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:14:34.501000  274539 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:14:34.501055  274539 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:14:34.501069  274539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:14:34.501108  274539 addons.go:239] Setting addon default-storageclass=true in "newest-cni-331530"
	I1109 14:14:34.501118  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	W1109 14:14:34.501127  274539 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:14:34.501153  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:34.501594  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.501929  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:14:34.501945  274539 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:14:34.501995  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:34.535820  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.537755  274539 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:14:34.537777  274539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:14:34.537828  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:34.541015  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.559704  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.613162  274539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:14:34.625580  274539 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:34.625672  274539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:34.636706  274539 api_server.go:72] duration metric: took 162.184344ms to wait for apiserver process to appear ...
	I1109 14:14:34.636730  274539 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:34.636748  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:34.645161  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:14:34.648499  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:14:34.648519  274539 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:14:34.661828  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:14:34.661852  274539 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:14:34.666411  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:14:34.675831  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:14:34.675849  274539 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:14:34.690000  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:14:34.690016  274539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:14:34.705252  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:14:34.705272  274539 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:14:34.719515  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:14:34.719540  274539 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:14:34.732790  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:14:34.732819  274539 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:14:34.745667  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:14:34.745693  274539 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:14:34.757973  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:14:34.757995  274539 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:14:34.770563  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:14:36.171483  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 14:14:36.171528  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 14:14:36.171551  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:36.194157  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 14:14:36.194201  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 14:14:36.637271  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:36.641625  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:14:36.641685  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:14:36.682804  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.037614109s)
	I1109 14:14:36.682849  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.016394225s)
	I1109 14:14:36.682940  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.912351833s)
	I1109 14:14:36.684411  274539 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-331530 addons enable metrics-server
	
	I1109 14:14:36.692971  274539 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:14:36.694141  274539 addons.go:515] duration metric: took 2.219578634s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:14:37.137025  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:37.141940  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:14:37.141967  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:14:37.637235  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:37.641207  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:14:37.642117  274539 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:37.642139  274539 api_server.go:131] duration metric: took 3.005402836s to wait for apiserver health ...
	I1109 14:14:37.642147  274539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:37.645738  274539 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:37.645773  274539 system_pods.go:61] "coredns-66bc5c9577-xvlhm" [ab5d6559-9c58-477e-bae9-e4cedcc2832e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:14:37.645784  274539 system_pods.go:61] "etcd-newest-cni-331530" [3508f193-5b63-49b0-bbc3-f94d167d8b0c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:14:37.645797  274539 system_pods.go:61] "kindnet-rmtgg" [59572d13-2d29-4a86-bf1d-e75d0dd0d43c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:14:37.645806  274539 system_pods.go:61] "kube-apiserver-newest-cni-331530" [d47aa681-ce72-491a-847d-050b27ac3607] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:14:37.645818  274539 system_pods.go:61] "kube-controller-manager-newest-cni-331530" [ad70a3ff-3aba-485c-a18a-79b65fb30455] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:14:37.645827  274539 system_pods.go:61] "kube-proxy-fkl5q" [faf18639-aeb9-4b17-bb1d-32e85cf54dce] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:14:37.645837  274539 system_pods.go:61] "kube-scheduler-newest-cni-331530" [e5a8d839-20ee-4400-81dd-abcc742b5c2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:14:37.645846  274539 system_pods.go:61] "storage-provisioner" [77fd8da7-bb4b-4c95-beb6-7d28e7eaabbb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:14:37.645854  274539 system_pods.go:74] duration metric: took 3.700675ms to wait for pod list to return data ...
	I1109 14:14:37.645865  274539 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:37.647993  274539 default_sa.go:45] found service account: "default"
	I1109 14:14:37.648014  274539 default_sa.go:55] duration metric: took 2.143672ms for default service account to be created ...
	I1109 14:14:37.648025  274539 kubeadm.go:587] duration metric: took 3.173506607s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:14:37.648041  274539 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:37.650075  274539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:37.650093  274539 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:37.650104  274539 node_conditions.go:105] duration metric: took 2.058154ms to run NodePressure ...
	I1109 14:14:37.650116  274539 start.go:242] waiting for startup goroutines ...
	I1109 14:14:37.650129  274539 start.go:247] waiting for cluster config update ...
	I1109 14:14:37.650142  274539 start.go:256] writing updated cluster config ...
	I1109 14:14:37.650382  274539 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:37.702340  274539 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:37.703684  274539 out.go:179] * Done! kubectl is now configured to use "newest-cni-331530" cluster and "default" namespace by default
	I1109 14:14:35.806075  268505 node_ready.go:49] node "auto-593530" is "Ready"
	I1109 14:14:35.806106  268505 node_ready.go:38] duration metric: took 11.002864775s for node "auto-593530" to be "Ready" ...
	I1109 14:14:35.806123  268505 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:35.806179  268505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:35.818063  268505 api_server.go:72] duration metric: took 11.295640515s to wait for apiserver process to appear ...
	I1109 14:14:35.818095  268505 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:35.818111  268505 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1109 14:14:35.823855  268505 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1109 14:14:35.825169  268505 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:35.825199  268505 api_server.go:131] duration metric: took 7.098891ms to wait for apiserver health ...
	I1109 14:14:35.825210  268505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:35.828461  268505 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:35.828489  268505 system_pods.go:61] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:35.828497  268505 system_pods.go:61] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:35.828505  268505 system_pods.go:61] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:35.828510  268505 system_pods.go:61] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:35.828514  268505 system_pods.go:61] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:35.828520  268505 system_pods.go:61] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:35.828525  268505 system_pods.go:61] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:35.828532  268505 system_pods.go:61] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:35.828543  268505 system_pods.go:74] duration metric: took 3.326746ms to wait for pod list to return data ...
	I1109 14:14:35.828552  268505 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:35.833171  268505 default_sa.go:45] found service account: "default"
	I1109 14:14:35.833194  268505 default_sa.go:55] duration metric: took 4.63348ms for default service account to be created ...
	I1109 14:14:35.833203  268505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:14:35.836004  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:35.836037  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:35.836045  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:35.836067  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:35.836081  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:35.836087  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:35.836098  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:35.836103  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:35.836117  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:35.836151  268505 retry.go:31] will retry after 221.840857ms: missing components: kube-dns
	I1109 14:14:36.063432  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.063469  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.063477  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.063484  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.063490  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.063494  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.063499  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.063504  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.063511  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.063527  268505 retry.go:31] will retry after 287.97307ms: missing components: kube-dns
	I1109 14:14:36.355243  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.355288  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.355298  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.355305  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.355310  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.355316  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.355323  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.355328  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.355335  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.355355  268505 retry.go:31] will retry after 457.71668ms: missing components: kube-dns
	I1109 14:14:36.816945  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.816977  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.816983  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.816989  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.816992  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.816995  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.816999  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.817001  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.817006  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.817021  268505 retry.go:31] will retry after 422.194538ms: missing components: kube-dns
	I1109 14:14:37.244097  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:37.244128  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Running
	I1109 14:14:37.244135  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:37.244140  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:37.244145  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:37.244150  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:37.244156  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:37.244163  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:37.244168  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Running
	I1109 14:14:37.244179  268505 system_pods.go:126] duration metric: took 1.410969005s to wait for k8s-apps to be running ...
	I1109 14:14:37.244189  268505 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:14:37.244238  268505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:37.259804  268505 system_svc.go:56] duration metric: took 15.607114ms WaitForService to wait for kubelet
	I1109 14:14:37.259830  268505 kubeadm.go:587] duration metric: took 12.737412837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:14:37.259896  268505 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:37.262679  268505 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:37.262707  268505 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:37.262722  268505 node_conditions.go:105] duration metric: took 2.815329ms to run NodePressure ...
	I1109 14:14:37.262735  268505 start.go:242] waiting for startup goroutines ...
	I1109 14:14:37.262744  268505 start.go:247] waiting for cluster config update ...
	I1109 14:14:37.262753  268505 start.go:256] writing updated cluster config ...
	I1109 14:14:37.262965  268505 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:37.267115  268505 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:37.270893  268505 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4t8ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.274992  268505 pod_ready.go:94] pod "coredns-66bc5c9577-4t8ck" is "Ready"
	I1109 14:14:37.275012  268505 pod_ready.go:86] duration metric: took 4.094749ms for pod "coredns-66bc5c9577-4t8ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.276932  268505 pod_ready.go:83] waiting for pod "etcd-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.280633  268505 pod_ready.go:94] pod "etcd-auto-593530" is "Ready"
	I1109 14:14:37.280664  268505 pod_ready.go:86] duration metric: took 3.711388ms for pod "etcd-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.282434  268505 pod_ready.go:83] waiting for pod "kube-apiserver-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.286082  268505 pod_ready.go:94] pod "kube-apiserver-auto-593530" is "Ready"
	I1109 14:14:37.286101  268505 pod_ready.go:86] duration metric: took 3.647964ms for pod "kube-apiserver-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.287867  268505 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.671541  268505 pod_ready.go:94] pod "kube-controller-manager-auto-593530" is "Ready"
	I1109 14:14:37.671564  268505 pod_ready.go:86] duration metric: took 383.676466ms for pod "kube-controller-manager-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.871247  268505 pod_ready.go:83] waiting for pod "kube-proxy-4mbmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.271957  268505 pod_ready.go:94] pod "kube-proxy-4mbmw" is "Ready"
	I1109 14:14:38.271985  268505 pod_ready.go:86] duration metric: took 400.714602ms for pod "kube-proxy-4mbmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.472184  268505 pod_ready.go:83] waiting for pod "kube-scheduler-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.872048  268505 pod_ready.go:94] pod "kube-scheduler-auto-593530" is "Ready"
	I1109 14:14:38.872075  268505 pod_ready.go:86] duration metric: took 399.867482ms for pod "kube-scheduler-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.872090  268505 pod_ready.go:40] duration metric: took 1.604942921s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:38.918703  268505 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:38.920127  268505 out.go:179] * Done! kubectl is now configured to use "auto-593530" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.09118484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.093749788Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d6f08b4a-edb6-4e68-afa7-b69bc0898883 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.094688282Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=95d21a97-eb78-4043-a461-9cc6ede5bd90 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.096845038Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.0972848Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.09742151Z" level=info msg="Ran pod sandbox cc0cd24b536ebf3dcf848522b3503a8e3e0f15df90a8c71e66c3f7899b2a8782 with infra container: kube-system/kindnet-rmtgg/POD" id=d6f08b4a-edb6-4e68-afa7-b69bc0898883 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.098067623Z" level=info msg="Ran pod sandbox 55654b58185b3cf601321e539b1a36261545aaaaa5b010c436d6bb1ea0c890fc with infra container: kube-system/kube-proxy-fkl5q/POD" id=95d21a97-eb78-4043-a461-9cc6ede5bd90 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.098435593Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=393a697e-ed8c-4d44-a11b-ae4a5eb0194c name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.098938445Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d620f04b-6f74-4e80-b646-fe3e7da1f828 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.099337207Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e74ef883-ec38-4e86-bc0d-21a14c372044 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.099707244Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5513d993-75cb-49b7-8bfe-5a1239407ae1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.100363648Z" level=info msg="Creating container: kube-system/kindnet-rmtgg/kindnet-cni" id=8e206d2b-f9f1-4295-928f-235b15127906 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.100450554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.100530731Z" level=info msg="Creating container: kube-system/kube-proxy-fkl5q/kube-proxy" id=6ea10be9-1e85-48c4-ad08-a1c4cd105479 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.100668598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.105539008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.106022654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.107965109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.10849806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.132499758Z" level=info msg="Created container 257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587: kube-system/kindnet-rmtgg/kindnet-cni" id=8e206d2b-f9f1-4295-928f-235b15127906 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.132965581Z" level=info msg="Starting container: 257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587" id=e0f21d58-7371-4947-8889-609c61a4ab01 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.134809997Z" level=info msg="Started container" PID=1043 containerID=257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587 description=kube-system/kindnet-rmtgg/kindnet-cni id=e0f21d58-7371-4947-8889-609c61a4ab01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc0cd24b536ebf3dcf848522b3503a8e3e0f15df90a8c71e66c3f7899b2a8782
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.136225544Z" level=info msg="Created container c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753: kube-system/kube-proxy-fkl5q/kube-proxy" id=6ea10be9-1e85-48c4-ad08-a1c4cd105479 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.136736708Z" level=info msg="Starting container: c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753" id=68c11189-0d06-4967-a23e-5e07ddc9b332 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:37 newest-cni-331530 crio[521]: time="2025-11-09T14:14:37.13970551Z" level=info msg="Started container" PID=1044 containerID=c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753 description=kube-system/kube-proxy-fkl5q/kube-proxy id=68c11189-0d06-4967-a23e-5e07ddc9b332 name=/runtime.v1.RuntimeService/StartContainer sandboxID=55654b58185b3cf601321e539b1a36261545aaaaa5b010c436d6bb1ea0c890fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c1bcd1c1af2c6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   55654b58185b3       kube-proxy-fkl5q                            kube-system
	257f5a6c7120e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   cc0cd24b536eb       kindnet-rmtgg                               kube-system
	d22c955680998       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   af0bb622a1107       kube-controller-manager-newest-cni-331530   kube-system
	1526f0e319d49       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   0263c0975135f       kube-apiserver-newest-cni-331530            kube-system
	b19747f4b9829       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   d191777186e49       kube-scheduler-newest-cni-331530            kube-system
	6f8a1e6423bae       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   1bb238ada2919       etcd-newest-cni-331530                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-331530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-331530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=newest-cni-331530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_14_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:14:03 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-331530
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:14:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:14:36 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:14:36 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:14:36 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 09 Nov 2025 14:14:36 +0000   Sun, 09 Nov 2025 14:14:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-331530
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                7aba5339-7922-4c58-b653-e5c31d75079c
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-331530                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-rmtgg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-331530             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-331530    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-fkl5q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-331530             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-331530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-331530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-331530 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-331530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node newest-cni-331530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     38s                kubelet          Node newest-cni-331530 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           33s                node-controller  Node newest-cni-331530 event: Registered Node newest-cni-331530 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node newest-cni-331530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node newest-cni-331530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x8 over 10s)  kubelet          Node newest-cni-331530 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-331530 event: Registered Node newest-cni-331530 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940] <==
	{"level":"warn","ts":"2025-11-09T14:14:35.541820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.556526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.562815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.568826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.576030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.583512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.590184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.597848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.606077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.618849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.625315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.631433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.637595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.643678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.649427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.655304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.661456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.667801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.674197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.680425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.687142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.703307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.709830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.715632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:14:35.766036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38104","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:14:43 up 57 min,  0 user,  load average: 5.35, 3.70, 2.23
	Linux newest-cni-331530 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [257f5a6c7120e71163fe8842e15b1e1f938f4b961f83889600976d21dafdf587] <==
	I1109 14:14:37.396112       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:14:37.396325       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:14:37.396416       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:14:37.396430       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:14:37.396442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:14:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:14:37.596124       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:14:37.596187       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:14:37.596200       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:14:37.596439       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:14:37.896257       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:14:37.896280       1 metrics.go:72] Registering metrics
	I1109 14:14:37.896341       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707] <==
	I1109 14:14:36.251226       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:14:36.251466       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:14:36.252656       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:14:36.252757       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1109 14:14:36.252797       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:14:36.252811       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:14:36.252817       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:14:36.252823       1 cache.go:39] Caches are synced for autoregister controller
	E1109 14:14:36.259328       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:14:36.259822       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:14:36.261304       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:14:36.261323       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:14:36.286418       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:14:36.297790       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:14:36.497931       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:14:36.522430       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:14:36.537415       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:14:36.543819       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:14:36.549333       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:14:36.577404       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.14.57"}
	I1109 14:14:36.585667       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.31.141"}
	I1109 14:14:37.153268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:14:39.808810       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:14:40.010501       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:14:40.059774       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642] <==
	I1109 14:14:39.604253       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:14:39.604266       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:14:39.604345       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:14:39.604465       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:14:39.604778       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:14:39.604804       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 14:14:39.605565       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:14:39.605735       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:14:39.608124       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:14:39.609312       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:14:39.610483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:14:39.610497       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:14:39.610506       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:14:39.610696       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:14:39.611584       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:14:39.612872       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:14:39.614376       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:14:39.621590       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 14:14:39.624840       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:14:39.624974       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:14:39.625062       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-331530"
	I1109 14:14:39.625111       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1109 14:14:39.627268       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:14:39.628156       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:14:39.631369       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c1bcd1c1af2c6c8b907c2210ab5673952c2685a93862ca3a5da288dfe9071753] <==
	I1109 14:14:37.174846       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:14:37.234471       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:14:37.334586       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:14:37.334619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:14:37.334771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:14:37.354134       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:14:37.354175       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:14:37.358832       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:14:37.359146       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:14:37.359165       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:14:37.360545       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:14:37.360571       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:14:37.360672       1 config.go:200] "Starting service config controller"
	I1109 14:14:37.360699       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:14:37.360722       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:14:37.360727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:14:37.360762       1 config.go:309] "Starting node config controller"
	I1109 14:14:37.360774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:14:37.360782       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:14:37.460792       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:14:37.460792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:14:37.460822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6] <==
	I1109 14:14:35.015985       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:14:36.197884       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:14:36.197931       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:14:36.197945       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:14:36.197963       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:14:36.214265       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:14:36.214287       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:14:36.217139       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:14:36.217165       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:14:36.217179       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:14:36.217236       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:14:36.317618       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.186508     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.290354     670 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.290500     670 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.290539     670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.291577     670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.295180     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-331530\" already exists" pod="kube-system/kube-scheduler-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.295210     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.301166     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-331530\" already exists" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.301206     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.307178     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-331530\" already exists" pod="kube-system/kube-apiserver-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.307209     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.311987     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-331530\" already exists" pod="kube-system/kube-controller-manager-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.783807     670 apiserver.go:52] "Watching apiserver"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.823361     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: E1109 14:14:36.828043     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-331530\" already exists" pod="kube-system/etcd-newest-cni-331530"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.886567     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893616     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/faf18639-aeb9-4b17-bb1d-32e85cf54dce-xtables-lock\") pod \"kube-proxy-fkl5q\" (UID: \"faf18639-aeb9-4b17-bb1d-32e85cf54dce\") " pod="kube-system/kube-proxy-fkl5q"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893739     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-cni-cfg\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893770     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-xtables-lock\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893819     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faf18639-aeb9-4b17-bb1d-32e85cf54dce-lib-modules\") pod \"kube-proxy-fkl5q\" (UID: \"faf18639-aeb9-4b17-bb1d-32e85cf54dce\") " pod="kube-system/kube-proxy-fkl5q"
	Nov 09 14:14:36 newest-cni-331530 kubelet[670]: I1109 14:14:36.893844     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59572d13-2d29-4a86-bf1d-e75d0dd0d43c-lib-modules\") pod \"kindnet-rmtgg\" (UID: \"59572d13-2d29-4a86-bf1d-e75d0dd0d43c\") " pod="kube-system/kindnet-rmtgg"
	Nov 09 14:14:38 newest-cni-331530 kubelet[670]: I1109 14:14:38.654491     670 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 09 14:14:38 newest-cni-331530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:14:38 newest-cni-331530 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:14:38 newest-cni-331530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-331530 -n newest-cni-331530
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-331530 -n newest-cni-331530: exit status 2 (315.488885ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-331530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xvlhm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-4hrdd kubernetes-dashboard-855c9754f9-2q22q
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-4hrdd kubernetes-dashboard-855c9754f9-2q22q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-4hrdd kubernetes-dashboard-855c9754f9-2q22q: exit status 1 (59.430841ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xvlhm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-4hrdd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2q22q" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-331530 describe pod coredns-66bc5c9577-xvlhm storage-provisioner dashboard-metrics-scraper-6ffb444bf9-4hrdd kubernetes-dashboard-855c9754f9-2q22q: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.09s)
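Note: all four pods do exist (they were returned by the non-running-pods query above), yet `kubectl describe pod` reports them as NotFound. The likely reason is that the describe is issued without a namespace and therefore only searches "default", while these pods live in kube-system and kubernetes-dashboard. A namespaced re-run would look roughly like the following (a sketch, not part of the recorded test run):

	kubectl --context newest-cni-331530 -n kube-system describe pod coredns-66bc5c9577-xvlhm storage-provisioner
	kubectl --context newest-cni-331530 -n kubernetes-dashboard describe pod kubernetes-dashboard-855c9754f9-2q22q dashboard-metrics-scraper-6ffb444bf9-4hrdd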

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-326524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-326524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.752524ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-326524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
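Note: the enable aborts during minikube's paused-state check rather than while applying the addon: the stderr above shows `sudo runc list -f json` exiting 1 inside the node because `/run/runc` does not exist. A manual reproduction might look like the following (hypothetical invocation; exact quoting of the remote command may differ):

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-326524 "sudo runc list -f json"
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-326524 "ls -ld /run/runc"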
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-326524 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-326524 describe deploy/metrics-server -n kube-system: exit status 1 (63.794491ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-326524 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
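Note: because the enable failed up front, the metrics-server Deployment was never created, which is why the describe above returns NotFound and the expected image string ("fake.domain/registry.k8s.io/echoserver:1.4") cannot be found. Had the Deployment been created, one way to verify the registry override would be a jsonpath query such as (sketch only):

	kubectl --context default-k8s-diff-port-326524 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'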
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-326524
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-326524:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9",
	        "Created": "2025-11-09T14:13:22.347253658Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:13:22.379186598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/hosts",
	        "LogPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9-json.log",
	        "Name": "/default-k8s-diff-port-326524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-326524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-326524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9",
	                "LowerDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-326524",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-326524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-326524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-326524",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-326524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b5cfd2adbe0f754ddb46613f71247cbde5286ed5ddf482e9a8190daf983b1b8",
	            "SandboxKey": "/var/run/docker/netns/9b5cfd2adbe0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-326524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:d3:a4:5b:26:8f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1418d8b0aecfeebbb964747ce9f2239c14745f39f121eb76b984b7589e5562c5",
	                    "EndpointID": "d26889310435eeafb127c05d2e2a7a5da2bb3be34ee26fa5402fc3d3cf806823",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-326524",
	                        "4d5e864b1f2e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524
I1109 14:14:39.251929    9365 config.go:182] Loaded profile config "auto-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-326524 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-326524 logs -n 25: (1.183238136s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p kubernetes-upgrade-755159                                                                                                                                                                                                                  │ kubernetes-upgrade-755159    │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p disable-driver-mounts-565545                                                                                                                                                                                                               │ disable-driver-mounts-565545 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable metrics-server -p embed-certs-273180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ stop    │ -p embed-certs-273180 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-169816 image list --format=json                                                                                                                                                                                               │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ no-preload-152932 image list --format=json                                                                                                                                                                                                    │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p no-preload-152932 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p auto-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ stop    │ -p newest-cni-331530 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-331530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ newest-cni-331530 image list --format=json                                                                                                                                                                                                    │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ pause   │ -p newest-cni-331530 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-326524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ ssh     │ -p auto-593530 pgrep -a kubelet                                                                                                                                                                                                               │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:14:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:14:27.213300  274539 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:14:27.213403  274539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:27.213415  274539 out.go:374] Setting ErrFile to fd 2...
	I1109 14:14:27.213421  274539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:27.213711  274539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:14:27.214223  274539 out.go:368] Setting JSON to false
	I1109 14:14:27.215819  274539 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3417,"bootTime":1762694250,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:14:27.215915  274539 start.go:143] virtualization: kvm guest
	I1109 14:14:27.221200  274539 out.go:179] * [newest-cni-331530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:14:27.222608  274539 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:14:27.222670  274539 notify.go:221] Checking for updates...
	I1109 14:14:27.225558  274539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:14:27.226971  274539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:27.228145  274539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:14:27.229214  274539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:14:27.230272  274539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:14:27.231916  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:27.232624  274539 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:14:27.261484  274539 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:14:27.261617  274539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:27.321108  274539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-09 14:14:27.311595109 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:27.321203  274539 docker.go:319] overlay module found
	I1109 14:14:27.322616  274539 out.go:179] * Using the docker driver based on existing profile
	I1109 14:14:27.323699  274539 start.go:309] selected driver: docker
	I1109 14:14:27.323718  274539 start.go:930] validating driver "docker" against &{Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:27.323819  274539 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:14:27.324328  274539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:27.383541  274539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-09 14:14:27.37426016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:27.383948  274539 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:14:27.383994  274539 cni.go:84] Creating CNI manager for ""
	I1109 14:14:27.384056  274539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:14:27.384102  274539 start.go:353] cluster config:
	{Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:27.385676  274539 out.go:179] * Starting "newest-cni-331530" primary control-plane node in "newest-cni-331530" cluster
	I1109 14:14:27.387354  274539 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:14:27.388465  274539 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:14:27.389520  274539 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:27.389549  274539 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:14:27.389567  274539 cache.go:65] Caching tarball of preloaded images
	I1109 14:14:27.389604  274539 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:14:27.389678  274539 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:14:27.389697  274539 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:14:27.389810  274539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/config.json ...
	I1109 14:14:27.411564  274539 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:14:27.411584  274539 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:14:27.411603  274539 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:14:27.411628  274539 start.go:360] acquireMachinesLock for newest-cni-331530: {Name:mk7b6183552a57a627a0de774642a3a4314af43c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:14:27.411719  274539 start.go:364] duration metric: took 46.418µs to acquireMachinesLock for "newest-cni-331530"
	I1109 14:14:27.411741  274539 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:14:27.411750  274539 fix.go:54] fixHost starting: 
	I1109 14:14:27.411979  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:27.429672  274539 fix.go:112] recreateIfNeeded on newest-cni-331530: state=Stopped err=<nil>
	W1109 14:14:27.429710  274539 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:14:25.001006  268505 addons.go:515] duration metric: took 478.538847ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:14:25.306166  268505 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-593530" context rescaled to 1 replicas
	W1109 14:14:26.806326  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	W1109 14:14:28.806731  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	I1109 14:14:27.273728  256773 node_ready.go:49] node "default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:27.273755  256773 node_ready.go:38] duration metric: took 40.502705469s for node "default-k8s-diff-port-326524" to be "Ready" ...
	I1109 14:14:27.273773  256773 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:27.273823  256773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:27.288739  256773 api_server.go:72] duration metric: took 40.987914742s to wait for apiserver process to appear ...
	I1109 14:14:27.288767  256773 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:27.288790  256773 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:14:27.293996  256773 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:14:27.294990  256773 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:27.295010  256773 api_server.go:131] duration metric: took 6.236021ms to wait for apiserver health ...
	I1109 14:14:27.295018  256773 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:27.298258  256773 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:27.298294  256773 system_pods.go:61] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.298303  256773 system_pods.go:61] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.298310  256773 system_pods.go:61] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.298316  256773 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.298324  256773 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.298333  256773 system_pods.go:61] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.298339  256773 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.298346  256773 system_pods.go:61] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.298358  256773 system_pods.go:74] duration metric: took 3.333122ms to wait for pod list to return data ...
	I1109 14:14:27.298370  256773 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:27.300717  256773 default_sa.go:45] found service account: "default"
	I1109 14:14:27.300736  256773 default_sa.go:55] duration metric: took 2.360756ms for default service account to be created ...
	I1109 14:14:27.300745  256773 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:14:27.304529  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.304592  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.304601  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.304615  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.304624  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.304629  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.304634  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.304656  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.304665  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.304688  256773 retry.go:31] will retry after 236.393087ms: missing components: kube-dns
	I1109 14:14:27.545158  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.545217  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.545229  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.545238  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.545244  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.545278  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.545288  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.545294  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.545305  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.545324  256773 retry.go:31] will retry after 241.871609ms: missing components: kube-dns
	I1109 14:14:27.792009  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:27.792039  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:27.792046  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:27.792052  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:27.792055  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:27.792059  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:27.792062  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:27.792066  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:27.792071  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:27.792109  256773 retry.go:31] will retry after 430.689591ms: missing components: kube-dns
	I1109 14:14:28.226855  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:28.226889  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:28.226897  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:28.226906  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:28.226913  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:28.226926  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:28.226931  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:28.226936  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:28.226953  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:28.226976  256773 retry.go:31] will retry after 511.736387ms: missing components: kube-dns
	I1109 14:14:28.742716  256773 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:28.742741  256773 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Running
	I1109 14:14:28.742746  256773 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running
	I1109 14:14:28.742759  256773 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running
	I1109 14:14:28.742763  256773 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running
	I1109 14:14:28.742767  256773 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running
	I1109 14:14:28.742770  256773 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running
	I1109 14:14:28.742773  256773 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running
	I1109 14:14:28.742776  256773 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Running
	I1109 14:14:28.742784  256773 system_pods.go:126] duration metric: took 1.442032955s to wait for k8s-apps to be running ...
	I1109 14:14:28.742793  256773 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:14:28.742832  256773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:28.755945  256773 system_svc.go:56] duration metric: took 13.142064ms WaitForService to wait for kubelet
	I1109 14:14:28.755970  256773 kubeadm.go:587] duration metric: took 42.455149759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:14:28.755990  256773 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:28.758414  256773 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:28.758439  256773 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:28.758458  256773 node_conditions.go:105] duration metric: took 2.459628ms to run NodePressure ...
	I1109 14:14:28.758473  256773 start.go:242] waiting for startup goroutines ...
	I1109 14:14:28.758487  256773 start.go:247] waiting for cluster config update ...
	I1109 14:14:28.758503  256773 start.go:256] writing updated cluster config ...
	I1109 14:14:28.758756  256773 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:28.762372  256773 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:28.765747  256773 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.769875  256773 pod_ready.go:94] pod "coredns-66bc5c9577-z8lkx" is "Ready"
	I1109 14:14:28.769897  256773 pod_ready.go:86] duration metric: took 4.124753ms for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.771797  256773 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.775231  256773 pod_ready.go:94] pod "etcd-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:28.775252  256773 pod_ready.go:86] duration metric: took 3.433428ms for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.777156  256773 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.780365  256773 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:28.780382  256773 pod_ready.go:86] duration metric: took 3.206399ms for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:28.782110  256773 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.166343  256773 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:29.166367  256773 pod_ready.go:86] duration metric: took 384.238658ms for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.366197  256773 pod_ready.go:83] waiting for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.766136  256773 pod_ready.go:94] pod "kube-proxy-n95wb" is "Ready"
	I1109 14:14:29.766157  256773 pod_ready.go:86] duration metric: took 399.937804ms for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:29.966186  256773 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:30.366783  256773 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-326524" is "Ready"
	I1109 14:14:30.366806  256773 pod_ready.go:86] duration metric: took 400.591526ms for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:30.366817  256773 pod_ready.go:40] duration metric: took 1.604418075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:30.409181  256773 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:30.411042  256773 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-326524" cluster and "default" namespace by default
	W1109 14:14:28.006842  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	W1109 14:14:30.504933  264151 pod_ready.go:104] pod "coredns-66bc5c9577-bbnm4" is not "Ready", error: <nil>
	I1109 14:14:27.431740  274539 out.go:252] * Restarting existing docker container for "newest-cni-331530" ...
	I1109 14:14:27.431804  274539 cli_runner.go:164] Run: docker start newest-cni-331530
	I1109 14:14:27.723911  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:27.742256  274539 kic.go:430] container "newest-cni-331530" state is running.
	I1109 14:14:27.742606  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:27.761864  274539 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/config.json ...
	I1109 14:14:27.762097  274539 machine.go:94] provisionDockerMachine start ...
	I1109 14:14:27.762181  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:27.781171  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:27.781382  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:27.781392  274539 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:14:27.782070  274539 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51370->127.0.0.1:33100: read: connection reset by peer
	I1109 14:14:30.912783  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-331530
	
	I1109 14:14:30.912818  274539 ubuntu.go:182] provisioning hostname "newest-cni-331530"
	I1109 14:14:30.912874  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:30.931520  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:30.931801  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:30.931833  274539 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-331530 && echo "newest-cni-331530" | sudo tee /etc/hostname
	I1109 14:14:31.065432  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-331530
	
	I1109 14:14:31.065515  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.083548  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:31.083824  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:31.083853  274539 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-331530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-331530/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-331530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:14:31.208936  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:14:31.208962  274539 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:14:31.208980  274539 ubuntu.go:190] setting up certificates
	I1109 14:14:31.208988  274539 provision.go:84] configureAuth start
	I1109 14:14:31.209030  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:31.228082  274539 provision.go:143] copyHostCerts
	I1109 14:14:31.228132  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:14:31.228148  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:14:31.228210  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:14:31.228302  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:14:31.228311  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:14:31.228339  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:14:31.228431  274539 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:14:31.228447  274539 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:14:31.228477  274539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:14:31.228542  274539 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.newest-cni-331530 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-331530]
	I1109 14:14:31.677040  274539 provision.go:177] copyRemoteCerts
	I1109 14:14:31.677126  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:14:31.677177  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.695249  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:31.788325  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:14:31.805579  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:14:31.822478  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:14:31.839249  274539 provision.go:87] duration metric: took 630.251605ms to configureAuth
	I1109 14:14:31.839268  274539 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:14:31.839440  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:31.839545  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:31.858063  274539 main.go:143] libmachine: Using SSH client type: native
	I1109 14:14:31.858358  274539 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1109 14:14:31.858385  274539 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:14:32.123352  274539 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:14:32.123375  274539 machine.go:97] duration metric: took 4.361263068s to provisionDockerMachine
	I1109 14:14:32.123388  274539 start.go:293] postStartSetup for "newest-cni-331530" (driver="docker")
	I1109 14:14:32.123400  274539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:14:32.123449  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:14:32.123487  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.141670  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.233243  274539 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:14:32.236496  274539 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:14:32.236518  274539 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:14:32.236527  274539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:14:32.236571  274539 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:14:32.236651  274539 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:14:32.236742  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:14:32.243898  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:14:32.260427  274539 start.go:296] duration metric: took 137.029267ms for postStartSetup
	I1109 14:14:32.260497  274539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:14:32.260537  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.278838  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.367429  274539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:14:32.371740  274539 fix.go:56] duration metric: took 4.95998604s for fixHost
	I1109 14:14:32.371766  274539 start.go:83] releasing machines lock for "newest-cni-331530", held for 4.96003446s
	I1109 14:14:32.371820  274539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-331530
	I1109 14:14:32.390330  274539 ssh_runner.go:195] Run: cat /version.json
	I1109 14:14:32.390386  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.390407  274539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:14:32.390472  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:32.407704  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.408755  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:32.564613  274539 ssh_runner.go:195] Run: systemctl --version
	I1109 14:14:32.571012  274539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:14:32.606920  274539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:14:32.612027  274539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:14:32.612087  274539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:14:32.620330  274539 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:14:32.620351  274539 start.go:496] detecting cgroup driver to use...
	I1109 14:14:32.620392  274539 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:14:32.620432  274539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:14:32.633961  274539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:14:32.645344  274539 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:14:32.645400  274539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:14:32.658618  274539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:14:32.670206  274539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:14:32.749620  274539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:14:32.829965  274539 docker.go:234] disabling docker service ...
	I1109 14:14:32.830022  274539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:14:32.843428  274539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:14:32.854871  274539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:14:32.939565  274539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:14:33.019395  274539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:14:33.031615  274539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:14:33.045402  274539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:14:33.045488  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.053882  274539 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:14:33.053932  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.062452  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.070568  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.078831  274539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:14:33.086248  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.094137  274539 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.102098  274539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:14:33.110267  274539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:14:33.117230  274539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:14:33.123967  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:33.203147  274539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:14:33.313625  274539 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:14:33.313705  274539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:14:33.317538  274539 start.go:564] Will wait 60s for crictl version
	I1109 14:14:33.317594  274539 ssh_runner.go:195] Run: which crictl
	I1109 14:14:33.321065  274539 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:14:33.345284  274539 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:14:33.345340  274539 ssh_runner.go:195] Run: crio --version
	I1109 14:14:33.371633  274539 ssh_runner.go:195] Run: crio --version
	I1109 14:14:33.400250  274539 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:14:33.401590  274539 cli_runner.go:164] Run: docker network inspect newest-cni-331530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:14:33.419331  274539 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:14:33.423271  274539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:14:33.434564  274539 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1109 14:14:32.504467  264151 pod_ready.go:94] pod "coredns-66bc5c9577-bbnm4" is "Ready"
	I1109 14:14:32.504496  264151 pod_ready.go:86] duration metric: took 34.504778764s for pod "coredns-66bc5c9577-bbnm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.506955  264151 pod_ready.go:83] waiting for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.510570  264151 pod_ready.go:94] pod "etcd-embed-certs-273180" is "Ready"
	I1109 14:14:32.510590  264151 pod_ready.go:86] duration metric: took 3.614216ms for pod "etcd-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.512402  264151 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.515898  264151 pod_ready.go:94] pod "kube-apiserver-embed-certs-273180" is "Ready"
	I1109 14:14:32.515921  264151 pod_ready.go:86] duration metric: took 3.495327ms for pod "kube-apiserver-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.517532  264151 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.703898  264151 pod_ready.go:94] pod "kube-controller-manager-embed-certs-273180" is "Ready"
	I1109 14:14:32.703925  264151 pod_ready.go:86] duration metric: took 186.376206ms for pod "kube-controller-manager-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:32.902976  264151 pod_ready.go:83] waiting for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.303238  264151 pod_ready.go:94] pod "kube-proxy-k6zsl" is "Ready"
	I1109 14:14:33.303266  264151 pod_ready.go:86] duration metric: took 400.264059ms for pod "kube-proxy-k6zsl" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.503415  264151 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.903284  264151 pod_ready.go:94] pod "kube-scheduler-embed-certs-273180" is "Ready"
	I1109 14:14:33.903309  264151 pod_ready.go:86] duration metric: took 399.863623ms for pod "kube-scheduler-embed-certs-273180" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:33.903322  264151 pod_ready.go:40] duration metric: took 35.907389797s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:33.951503  264151 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:33.954226  264151 out.go:179] * Done! kubectl is now configured to use "embed-certs-273180" cluster and "default" namespace by default
	W1109 14:14:31.306521  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	W1109 14:14:33.807024  268505 node_ready.go:57] node "auto-593530" has "Ready":"False" status (will retry)
	I1109 14:14:33.435777  274539 kubeadm.go:884] updating cluster {Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:14:33.436335  274539 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:33.436448  274539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:14:33.468543  274539 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:14:33.468564  274539 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:14:33.468605  274539 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:14:33.493302  274539 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:14:33.493321  274539 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:14:33.493331  274539 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:14:33.493431  274539 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-331530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:14:33.493502  274539 ssh_runner.go:195] Run: crio config
	I1109 14:14:33.539072  274539 cni.go:84] Creating CNI manager for ""
	I1109 14:14:33.539095  274539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:14:33.539111  274539 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1109 14:14:33.539142  274539 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-331530 NodeName:newest-cni-331530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:14:33.539279  274539 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-331530"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:14:33.539348  274539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:14:33.547453  274539 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:14:33.547513  274539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:14:33.554838  274539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:14:33.567355  274539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:14:33.579294  274539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1109 14:14:33.590819  274539 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:14:33.594197  274539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:14:33.603341  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:33.683905  274539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:14:33.721780  274539 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530 for IP: 192.168.76.2
	I1109 14:14:33.721801  274539 certs.go:195] generating shared ca certs ...
	I1109 14:14:33.721820  274539 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:33.721968  274539 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:14:33.722021  274539 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:14:33.722032  274539 certs.go:257] generating profile certs ...
	I1109 14:14:33.722135  274539 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/client.key
	I1109 14:14:33.722199  274539 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key.5fb0b4cb
	I1109 14:14:33.722252  274539 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key
	I1109 14:14:33.722385  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:14:33.722438  274539 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:14:33.722453  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:14:33.722488  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:14:33.722523  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:14:33.722555  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:14:33.722611  274539 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:14:33.723238  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:14:33.742105  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:14:33.760423  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:14:33.780350  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:14:33.804462  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:14:33.822573  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:14:33.838810  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:14:33.856050  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/newest-cni-331530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:14:33.873441  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:14:33.889777  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:14:33.907245  274539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:14:33.927298  274539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:14:33.939424  274539 ssh_runner.go:195] Run: openssl version
	I1109 14:14:33.945995  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:14:33.954688  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.958715  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.958770  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:14:33.997357  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:14:34.005827  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:14:34.015472  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.019587  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.019636  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:14:34.057846  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:14:34.066728  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:14:34.074987  274539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.078819  274539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.078868  274539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:14:34.114498  274539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:14:34.122522  274539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:14:34.126262  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:14:34.160486  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:14:34.196162  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:14:34.241808  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:14:34.290982  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:14:34.342552  274539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:14:34.396435  274539 kubeadm.go:401] StartCluster: {Name:newest-cni-331530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-331530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:34.396545  274539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:14:34.396609  274539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:14:34.431828  274539 cri.go:89] found id: "d22c955680998e8d3360bcb96663b899551daca46c0a944cde9776fb80dea642"
	I1109 14:14:34.431852  274539 cri.go:89] found id: "1526f0e319d499e8e117ed7e56f754a23db76f71937a68ae2f523503033a0707"
	I1109 14:14:34.431858  274539 cri.go:89] found id: "b19747f4b9829ec6cfd1c55b73ac36adb17b5d764e1893d2e89babe3fddbf0d6"
	I1109 14:14:34.431862  274539 cri.go:89] found id: "6f8a1e6423baec933fe43d33cd67337b3c300a7ea659a7a6c65748bb714c0940"
	I1109 14:14:34.431867  274539 cri.go:89] found id: ""
	I1109 14:14:34.431909  274539 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:14:34.443922  274539 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:34Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:14:34.443987  274539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:14:34.451921  274539 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:14:34.451945  274539 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:14:34.451985  274539 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:14:34.459577  274539 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:14:34.460472  274539 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-331530" does not appear in /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:34.461194  274539 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-5854/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-331530" cluster setting kubeconfig missing "newest-cni-331530" context setting]
	I1109 14:14:34.462308  274539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.463959  274539 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:14:34.471814  274539 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:14:34.471839  274539 kubeadm.go:602] duration metric: took 19.88831ms to restartPrimaryControlPlane
	I1109 14:14:34.471851  274539 kubeadm.go:403] duration metric: took 75.424288ms to StartCluster
	I1109 14:14:34.471865  274539 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.471929  274539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:34.474250  274539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:34.474493  274539 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:14:34.474569  274539 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:14:34.474679  274539 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-331530"
	I1109 14:14:34.474698  274539 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-331530"
	W1109 14:14:34.474707  274539 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:14:34.474710  274539 addons.go:70] Setting dashboard=true in profile "newest-cni-331530"
	I1109 14:14:34.474733  274539 addons.go:239] Setting addon dashboard=true in "newest-cni-331530"
	I1109 14:14:34.474741  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	W1109 14:14:34.474742  274539 addons.go:248] addon dashboard should already be in state true
	I1109 14:14:34.474741  274539 addons.go:70] Setting default-storageclass=true in profile "newest-cni-331530"
	I1109 14:14:34.474766  274539 config.go:182] Loaded profile config "newest-cni-331530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:34.474768  274539 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-331530"
	I1109 14:14:34.474772  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:34.475110  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.475311  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.475379  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.476931  274539 out.go:179] * Verifying Kubernetes components...
	I1109 14:14:34.478118  274539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:14:34.498669  274539 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:14:34.499967  274539 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:14:34.501000  274539 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:14:34.501055  274539 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:14:34.501069  274539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:14:34.501108  274539 addons.go:239] Setting addon default-storageclass=true in "newest-cni-331530"
	I1109 14:14:34.501118  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	W1109 14:14:34.501127  274539 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:14:34.501153  274539 host.go:66] Checking if "newest-cni-331530" exists ...
	I1109 14:14:34.501594  274539 cli_runner.go:164] Run: docker container inspect newest-cni-331530 --format={{.State.Status}}
	I1109 14:14:34.501929  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:14:34.501945  274539 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:14:34.501995  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:34.535820  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.537755  274539 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:14:34.537777  274539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:14:34.537828  274539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-331530
	I1109 14:14:34.541015  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.559704  274539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/newest-cni-331530/id_rsa Username:docker}
	I1109 14:14:34.613162  274539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:14:34.625580  274539 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:34.625672  274539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:34.636706  274539 api_server.go:72] duration metric: took 162.184344ms to wait for apiserver process to appear ...
	I1109 14:14:34.636730  274539 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:34.636748  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:34.645161  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:14:34.648499  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:14:34.648519  274539 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:14:34.661828  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:14:34.661852  274539 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:14:34.666411  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:14:34.675831  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:14:34.675849  274539 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:14:34.690000  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:14:34.690016  274539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:14:34.705252  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:14:34.705272  274539 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:14:34.719515  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:14:34.719540  274539 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:14:34.732790  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:14:34.732819  274539 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:14:34.745667  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:14:34.745693  274539 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:14:34.757973  274539 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:14:34.757995  274539 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:14:34.770563  274539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:14:36.171483  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 14:14:36.171528  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 14:14:36.171551  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:36.194157  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 14:14:36.194201  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 14:14:36.637271  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:36.641625  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:14:36.641685  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:14:36.682804  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.037614109s)
	I1109 14:14:36.682849  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.016394225s)
	I1109 14:14:36.682940  274539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.912351833s)
	I1109 14:14:36.684411  274539 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-331530 addons enable metrics-server
	
	I1109 14:14:36.692971  274539 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:14:36.694141  274539 addons.go:515] duration metric: took 2.219578634s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:14:37.137025  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:37.141940  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:14:37.141967  274539 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:14:37.637235  274539 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:14:37.641207  274539 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:14:37.642117  274539 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:37.642139  274539 api_server.go:131] duration metric: took 3.005402836s to wait for apiserver health ...
	I1109 14:14:37.642147  274539 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:37.645738  274539 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:37.645773  274539 system_pods.go:61] "coredns-66bc5c9577-xvlhm" [ab5d6559-9c58-477e-bae9-e4cedcc2832e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:14:37.645784  274539 system_pods.go:61] "etcd-newest-cni-331530" [3508f193-5b63-49b0-bbc3-f94d167d8b0c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:14:37.645797  274539 system_pods.go:61] "kindnet-rmtgg" [59572d13-2d29-4a86-bf1d-e75d0dd0d43c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:14:37.645806  274539 system_pods.go:61] "kube-apiserver-newest-cni-331530" [d47aa681-ce72-491a-847d-050b27ac3607] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:14:37.645818  274539 system_pods.go:61] "kube-controller-manager-newest-cni-331530" [ad70a3ff-3aba-485c-a18a-79b65fb30455] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:14:37.645827  274539 system_pods.go:61] "kube-proxy-fkl5q" [faf18639-aeb9-4b17-bb1d-32e85cf54dce] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:14:37.645837  274539 system_pods.go:61] "kube-scheduler-newest-cni-331530" [e5a8d839-20ee-4400-81dd-abcc742b5c2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:14:37.645846  274539 system_pods.go:61] "storage-provisioner" [77fd8da7-bb4b-4c95-beb6-7d28e7eaabbb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:14:37.645854  274539 system_pods.go:74] duration metric: took 3.700675ms to wait for pod list to return data ...
	I1109 14:14:37.645865  274539 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:37.647993  274539 default_sa.go:45] found service account: "default"
	I1109 14:14:37.648014  274539 default_sa.go:55] duration metric: took 2.143672ms for default service account to be created ...
	I1109 14:14:37.648025  274539 kubeadm.go:587] duration metric: took 3.173506607s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:14:37.648041  274539 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:37.650075  274539 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:37.650093  274539 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:37.650104  274539 node_conditions.go:105] duration metric: took 2.058154ms to run NodePressure ...
	I1109 14:14:37.650116  274539 start.go:242] waiting for startup goroutines ...
	I1109 14:14:37.650129  274539 start.go:247] waiting for cluster config update ...
	I1109 14:14:37.650142  274539 start.go:256] writing updated cluster config ...
	I1109 14:14:37.650382  274539 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:37.702340  274539 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:37.703684  274539 out.go:179] * Done! kubectl is now configured to use "newest-cni-331530" cluster and "default" namespace by default
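	
	Editor's note: the sequence above is the usual wait for the apiserver to become healthy after a restart. The /healthz probe first returns 403 for the anonymous request, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. Below is a minimal Go sketch of a polling loop like that; the endpoint URL, retry interval, and timeout are illustrative values, and TLS verification is skipped only because the probe targets a throwaway test cluster.

// healthzwait.go - illustrative sketch of polling an apiserver /healthz
// endpoint until it reports 200 OK, mirroring the retry loop in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster uses a self-signed CA, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
			// 403 or 500 with the failing post-start hooks listed, as in the log.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // back off before retrying
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Illustrative endpoint taken from the log above.
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 3*time.Minute); err != nil {
		panic(err)
	}
}
	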
	I1109 14:14:35.806075  268505 node_ready.go:49] node "auto-593530" is "Ready"
	I1109 14:14:35.806106  268505 node_ready.go:38] duration metric: took 11.002864775s for node "auto-593530" to be "Ready" ...
	I1109 14:14:35.806123  268505 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:14:35.806179  268505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:14:35.818063  268505 api_server.go:72] duration metric: took 11.295640515s to wait for apiserver process to appear ...
	I1109 14:14:35.818095  268505 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:14:35.818111  268505 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1109 14:14:35.823855  268505 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1109 14:14:35.825169  268505 api_server.go:141] control plane version: v1.34.1
	I1109 14:14:35.825199  268505 api_server.go:131] duration metric: took 7.098891ms to wait for apiserver health ...
	I1109 14:14:35.825210  268505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:14:35.828461  268505 system_pods.go:59] 8 kube-system pods found
	I1109 14:14:35.828489  268505 system_pods.go:61] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:35.828497  268505 system_pods.go:61] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:35.828505  268505 system_pods.go:61] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:35.828510  268505 system_pods.go:61] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:35.828514  268505 system_pods.go:61] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:35.828520  268505 system_pods.go:61] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:35.828525  268505 system_pods.go:61] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:35.828532  268505 system_pods.go:61] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:35.828543  268505 system_pods.go:74] duration metric: took 3.326746ms to wait for pod list to return data ...
	I1109 14:14:35.828552  268505 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:14:35.833171  268505 default_sa.go:45] found service account: "default"
	I1109 14:14:35.833194  268505 default_sa.go:55] duration metric: took 4.63348ms for default service account to be created ...
	I1109 14:14:35.833203  268505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:14:35.836004  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:35.836037  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:35.836045  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:35.836067  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:35.836081  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:35.836087  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:35.836098  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:35.836103  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:35.836117  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:35.836151  268505 retry.go:31] will retry after 221.840857ms: missing components: kube-dns
	I1109 14:14:36.063432  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.063469  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.063477  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.063484  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.063490  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.063494  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.063499  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.063504  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.063511  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.063527  268505 retry.go:31] will retry after 287.97307ms: missing components: kube-dns
	I1109 14:14:36.355243  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.355288  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.355298  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.355305  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.355310  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.355316  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.355323  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.355328  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.355335  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.355355  268505 retry.go:31] will retry after 457.71668ms: missing components: kube-dns
	I1109 14:14:36.816945  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:36.816977  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:14:36.816983  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:36.816989  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:36.816992  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:36.816995  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:36.816999  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:36.817001  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:36.817006  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:14:36.817021  268505 retry.go:31] will retry after 422.194538ms: missing components: kube-dns
	I1109 14:14:37.244097  268505 system_pods.go:86] 8 kube-system pods found
	I1109 14:14:37.244128  268505 system_pods.go:89] "coredns-66bc5c9577-4t8ck" [81fbebb0-be83-480a-a495-860724bffcd9] Running
	I1109 14:14:37.244135  268505 system_pods.go:89] "etcd-auto-593530" [bae16052-4662-45e1-acda-ec3ada44231c] Running
	I1109 14:14:37.244140  268505 system_pods.go:89] "kindnet-g9b75" [7f357927-0462-4d9e-aac9-6dff0d558e57] Running
	I1109 14:14:37.244145  268505 system_pods.go:89] "kube-apiserver-auto-593530" [0ad2c113-c806-465c-afda-fe2484e6bce3] Running
	I1109 14:14:37.244150  268505 system_pods.go:89] "kube-controller-manager-auto-593530" [2a3d8b96-ca1f-44f3-8718-ab9742d798d4] Running
	I1109 14:14:37.244156  268505 system_pods.go:89] "kube-proxy-4mbmw" [7fa94b4b-3c32-483b-9292-2292ff14c906] Running
	I1109 14:14:37.244163  268505 system_pods.go:89] "kube-scheduler-auto-593530" [6c14b671-bc78-406d-b96e-40ed07437e0b] Running
	I1109 14:14:37.244168  268505 system_pods.go:89] "storage-provisioner" [47454088-55f9-4c94-9bea-49bab0c8d429] Running
	I1109 14:14:37.244179  268505 system_pods.go:126] duration metric: took 1.410969005s to wait for k8s-apps to be running ...
	I1109 14:14:37.244189  268505 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:14:37.244238  268505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:37.259804  268505 system_svc.go:56] duration metric: took 15.607114ms WaitForService to wait for kubelet
	I1109 14:14:37.259830  268505 kubeadm.go:587] duration metric: took 12.737412837s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:14:37.259896  268505 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:14:37.262679  268505 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:14:37.262707  268505 node_conditions.go:123] node cpu capacity is 8
	I1109 14:14:37.262722  268505 node_conditions.go:105] duration metric: took 2.815329ms to run NodePressure ...
	I1109 14:14:37.262735  268505 start.go:242] waiting for startup goroutines ...
	I1109 14:14:37.262744  268505 start.go:247] waiting for cluster config update ...
	I1109 14:14:37.262753  268505 start.go:256] writing updated cluster config ...
	I1109 14:14:37.262965  268505 ssh_runner.go:195] Run: rm -f paused
	I1109 14:14:37.267115  268505 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:37.270893  268505 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4t8ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.274992  268505 pod_ready.go:94] pod "coredns-66bc5c9577-4t8ck" is "Ready"
	I1109 14:14:37.275012  268505 pod_ready.go:86] duration metric: took 4.094749ms for pod "coredns-66bc5c9577-4t8ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.276932  268505 pod_ready.go:83] waiting for pod "etcd-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.280633  268505 pod_ready.go:94] pod "etcd-auto-593530" is "Ready"
	I1109 14:14:37.280664  268505 pod_ready.go:86] duration metric: took 3.711388ms for pod "etcd-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.282434  268505 pod_ready.go:83] waiting for pod "kube-apiserver-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.286082  268505 pod_ready.go:94] pod "kube-apiserver-auto-593530" is "Ready"
	I1109 14:14:37.286101  268505 pod_ready.go:86] duration metric: took 3.647964ms for pod "kube-apiserver-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.287867  268505 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.671541  268505 pod_ready.go:94] pod "kube-controller-manager-auto-593530" is "Ready"
	I1109 14:14:37.671564  268505 pod_ready.go:86] duration metric: took 383.676466ms for pod "kube-controller-manager-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:37.871247  268505 pod_ready.go:83] waiting for pod "kube-proxy-4mbmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.271957  268505 pod_ready.go:94] pod "kube-proxy-4mbmw" is "Ready"
	I1109 14:14:38.271985  268505 pod_ready.go:86] duration metric: took 400.714602ms for pod "kube-proxy-4mbmw" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.472184  268505 pod_ready.go:83] waiting for pod "kube-scheduler-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.872048  268505 pod_ready.go:94] pod "kube-scheduler-auto-593530" is "Ready"
	I1109 14:14:38.872075  268505 pod_ready.go:86] duration metric: took 399.867482ms for pod "kube-scheduler-auto-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:14:38.872090  268505 pod_ready.go:40] duration metric: took 1.604942921s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:14:38.918703  268505 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:14:38.920127  268505 out.go:179] * Done! kubectl is now configured to use "auto-593530" cluster and "default" namespace by default
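	
	Editor's note: the auto-593530 startup above performs the same kind of readiness wait at the pod level, retrying while coredns (kube-dns) and storage-provisioner are still Pending. A rough client-go sketch of such a check follows; the kubeconfig path and timeout are assumptions, and checking only the pod Phase (rather than the full Ready condition minikube inspects) is a deliberate simplification.

// podwait.go - illustrative sketch of retrying until every kube-system pod is
// Running, similar to the "missing components: kube-dns" retry loop above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForKubeSystem(clientset *kubernetes.Clientset, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		pending := 0
		for _, pod := range pods.Items {
			if pod.Status.Phase != corev1.PodRunning {
				pending++
				fmt.Printf("waiting for %s (phase %s)\n", pod.Name, pod.Status.Phase)
			}
		}
		if pending == 0 {
			return nil // all kube-system pods are Running
		}
		time.Sleep(2 * time.Second) // retry, as the log does with growing backoff
	}
	return fmt.Errorf("kube-system pods not ready within %s", timeout)
}

func main() {
	// Hypothetical kubeconfig path; the report uses the Jenkins workspace path.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForKubeSystem(clientset, 4*time.Minute); err != nil {
		panic(err)
	}
}
	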
	
	
	==> CRI-O <==
	Nov 09 14:14:27 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:27.610452364Z" level=info msg="Starting container: 34b82eaa4d674fcce6179faed1e8afa10e5313e488cba37a2d43a0f612da37ae" id=4c0940ca-85bd-4f76-a1ba-ca9fa417feab name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:27 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:27.612450776Z" level=info msg="Started container" PID=1854 containerID=34b82eaa4d674fcce6179faed1e8afa10e5313e488cba37a2d43a0f612da37ae description=kube-system/coredns-66bc5c9577-z8lkx/coredns id=4c0940ca-85bd-4f76-a1ba-ca9fa417feab name=/runtime.v1.RuntimeService/StartContainer sandboxID=15d3c22bf549f2b92ac9292395c00dbb6038763dc639bde3bbfb4bd3efb62633
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.863693218Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6d9634a7-ab2e-4f4a-b540-612ffdf82051 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.863783727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.868449452Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ef6c4b3b693f10a3b54b64dd2ae929ada20658fca7846c85d10ff76db529107d UID:fc5f7a0f-3467-424e-a629-38217364cc98 NetNS:/var/run/netns/f418ee94-416c-4ca9-bb71-ba031b6853b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a880}] Aliases:map[]}"
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.868486224Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.877677721Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ef6c4b3b693f10a3b54b64dd2ae929ada20658fca7846c85d10ff76db529107d UID:fc5f7a0f-3467-424e-a629-38217364cc98 NetNS:/var/run/netns/f418ee94-416c-4ca9-bb71-ba031b6853b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a880}] Aliases:map[]}"
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.877798392Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.878461534Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.879605819Z" level=info msg="Ran pod sandbox ef6c4b3b693f10a3b54b64dd2ae929ada20658fca7846c85d10ff76db529107d with infra container: default/busybox/POD" id=6d9634a7-ab2e-4f4a-b540-612ffdf82051 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.880627745Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=df08d387-9cea-4bf3-af8d-a420f4b37537 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.880765031Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=df08d387-9cea-4bf3-af8d-a420f4b37537 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.880801772Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=df08d387-9cea-4bf3-af8d-a420f4b37537 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.881514022Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f137ce61-881c-46af-8321-f90352ef602b name=/runtime.v1.ImageService/PullImage
	Nov 09 14:14:30 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:30.884241608Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.584577284Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f137ce61-881c-46af-8321-f90352ef602b name=/runtime.v1.ImageService/PullImage
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.585276761Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=adc3dce9-f603-4492-a04c-a27022506001 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.586506182Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eaaade38-d871-4d7e-84a6-0d2134834bc4 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.589685015Z" level=info msg="Creating container: default/busybox/busybox" id=f15fea46-a13c-4c2c-9dc4-0ed5ee6e3f7c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.589798011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.59416675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.594669743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.624689091Z" level=info msg="Created container f67a4d548d067db75851cd478e4134fc7f65158147ff6e327191ad4a8ff93775: default/busybox/busybox" id=f15fea46-a13c-4c2c-9dc4-0ed5ee6e3f7c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.625143964Z" level=info msg="Starting container: f67a4d548d067db75851cd478e4134fc7f65158147ff6e327191ad4a8ff93775" id=51138f75-4edb-4537-8a78-9dc0efa2029f name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:31 default-k8s-diff-port-326524 crio[780]: time="2025-11-09T14:14:31.626835512Z" level=info msg="Started container" PID=1935 containerID=f67a4d548d067db75851cd478e4134fc7f65158147ff6e327191ad4a8ff93775 description=default/busybox/busybox id=51138f75-4edb-4537-8a78-9dc0efa2029f name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef6c4b3b693f10a3b54b64dd2ae929ada20658fca7846c85d10ff76db529107d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f67a4d548d067       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   ef6c4b3b693f1       busybox                                                default
	34b82eaa4d674       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago       Running             coredns                   0                   15d3c22bf549f       coredns-66bc5c9577-z8lkx                               kube-system
	7b24d06752df1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago       Running             storage-provisioner       0                   0d835acc24762       storage-provisioner                                    kube-system
	ee8b640d48a43       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      53 seconds ago       Running             kube-proxy                0                   1a24d674edbb9       kube-proxy-n95wb                                       kube-system
	aaa479f6c35e5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      53 seconds ago       Running             kindnet-cni               0                   76a197eea56d3       kindnet-fdxsl                                          kube-system
	a105d1d23880c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   89e96948fd883       etcd-default-k8s-diff-port-326524                      kube-system
	89098171562ca       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   02b1496b1fafc       kube-apiserver-default-k8s-diff-port-326524            kube-system
	b9951b4b7d1b5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   f130ff18a3385       kube-scheduler-default-k8s-diff-port-326524            kube-system
	e52368a50ddc1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   50d4ff1af4c10       kube-controller-manager-default-k8s-diff-port-326524   kube-system
	
	
	==> coredns [34b82eaa4d674fcce6179faed1e8afa10e5313e488cba37a2d43a0f612da37ae] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59353 - 38355 "HINFO IN 9221452048412262694.6870049905192335567. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.475668673s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-326524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-326524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=default-k8s-diff-port-326524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_13_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:13:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-326524
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:14:31 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:14:31 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:14:31 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:14:31 +0000   Sun, 09 Nov 2025 14:14:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-326524
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                d901abab-4a5c-4bab-8d2e-5eebe721a5ed
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-z8lkx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-default-k8s-diff-port-326524                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-fdxsl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-326524             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-326524    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-n95wb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-326524             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientPID
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node default-k8s-diff-port-326524 event: Registered Node default-k8s-diff-port-326524 in Controller
	  Normal  NodeReady                13s                kubelet          Node default-k8s-diff-port-326524 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [a105d1d23880c0189ba11188c47263d70880a3b285d57c5a99d21dfe149eeb65] <==
	{"level":"info","ts":"2025-11-09T14:13:46.064254Z","caller":"traceutil/trace.go:172","msg":"trace[452467714] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"177.054599ms","start":"2025-11-09T14:13:45.887188Z","end":"2025-11-09T14:13:46.064243Z","steps":["trace[452467714] 'process raft request'  (duration: 176.537514ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:46.285622Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.259561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-09T14:13:46.285694Z","caller":"traceutil/trace.go:172","msg":"trace[464448776] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:371; }","duration":"154.339959ms","start":"2025-11-09T14:13:46.131339Z","end":"2025-11-09T14:13:46.285679Z","steps":["trace[464448776] 'agreement among raft nodes before linearized reading'  (duration: 44.919158ms)","trace[464448776] 'range keys from in-memory index tree'  (duration: 109.153651ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:13:46.286116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.218864ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596946175712654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:288 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3924 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:13:46.286208Z","caller":"traceutil/trace.go:172","msg":"trace[1750041041] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"157.080842ms","start":"2025-11-09T14:13:46.129107Z","end":"2025-11-09T14:13:46.286188Z","steps":["trace[1750041041] 'process raft request'  (duration: 47.22119ms)","trace[1750041041] 'compare'  (duration: 109.124327ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:46.286295Z","caller":"traceutil/trace.go:172","msg":"trace[1311972559] linearizableReadLoop","detail":"{readStateIndex:383; appliedIndex:382; }","duration":"110.042785ms","start":"2025-11-09T14:13:46.176238Z","end":"2025-11-09T14:13:46.286280Z","steps":["trace[1311972559] 'read index received'  (duration: 109.203996ms)","trace[1311972559] 'applied index is now lower than readState.Index'  (duration: 837.524µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:46.286370Z","caller":"traceutil/trace.go:172","msg":"trace[1677569888] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"152.362794ms","start":"2025-11-09T14:13:46.133998Z","end":"2025-11-09T14:13:46.286361Z","steps":["trace[1677569888] 'process raft request'  (duration: 152.326418ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:46.286532Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.936523ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-09T14:13:46.286558Z","caller":"traceutil/trace.go:172","msg":"trace[605072617] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:375; }","duration":"152.96638ms","start":"2025-11-09T14:13:46.133583Z","end":"2025-11-09T14:13:46.286549Z","steps":["trace[605072617] 'agreement among raft nodes before linearized reading'  (duration: 152.869198ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:46.286399Z","caller":"traceutil/trace.go:172","msg":"trace[226971785] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"153.582861ms","start":"2025-11-09T14:13:46.132805Z","end":"2025-11-09T14:13:46.286388Z","steps":["trace[226971785] 'process raft request'  (duration: 153.392608ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:46.286430Z","caller":"traceutil/trace.go:172","msg":"trace[976027201] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"153.148255ms","start":"2025-11-09T14:13:46.133273Z","end":"2025-11-09T14:13:46.286421Z","steps":["trace[976027201] 'process raft request'  (duration: 152.998584ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:46.515901Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.906911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-11-09T14:13:46.515966Z","caller":"traceutil/trace.go:172","msg":"trace[2055639699] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:377; }","duration":"140.988976ms","start":"2025-11-09T14:13:46.374961Z","end":"2025-11-09T14:13:46.515950Z","steps":["trace[2055639699] 'agreement among raft nodes before linearized reading'  (duration: 74.060884ms)","trace[2055639699] 'range keys from in-memory index tree'  (duration: 66.718523ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:46.515985Z","caller":"traceutil/trace.go:172","msg":"trace[1684852252] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"185.798025ms","start":"2025-11-09T14:13:46.330171Z","end":"2025-11-09T14:13:46.515969Z","steps":["trace[1684852252] 'process raft request'  (duration: 185.696761ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:46.516037Z","caller":"traceutil/trace.go:172","msg":"trace[1452100132] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"185.325141ms","start":"2025-11-09T14:13:46.330706Z","end":"2025-11-09T14:13:46.516031Z","steps":["trace[1452100132] 'process raft request'  (duration: 185.202712ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:46.516068Z","caller":"traceutil/trace.go:172","msg":"trace[385958861] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"180.526697ms","start":"2025-11-09T14:13:46.335536Z","end":"2025-11-09T14:13:46.516063Z","steps":["trace[385958861] 'process raft request'  (duration: 180.415457ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:46.516037Z","caller":"traceutil/trace.go:172","msg":"trace[1816464836] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"140.608086ms","start":"2025-11-09T14:13:46.375418Z","end":"2025-11-09T14:13:46.516026Z","steps":["trace[1816464836] 'process raft request'  (duration: 140.578652ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:46.515986Z","caller":"traceutil/trace.go:172","msg":"trace[1863706737] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"185.784794ms","start":"2025-11-09T14:13:46.330174Z","end":"2025-11-09T14:13:46.515959Z","steps":["trace[1863706737] 'process raft request'  (duration: 118.91871ms)","trace[1863706737] 'compare'  (duration: 66.6353ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:13:46.699283Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.34982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-11-09T14:13:46.699350Z","caller":"traceutil/trace.go:172","msg":"trace[763300798] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:387; }","duration":"105.435933ms","start":"2025-11-09T14:13:46.593899Z","end":"2025-11-09T14:13:46.699335Z","steps":["trace[763300798] 'agreement among raft nodes before linearized reading'  (duration: 79.807372ms)","trace[763300798] 'range keys from in-memory index tree'  (duration: 25.442284ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:46.699355Z","caller":"traceutil/trace.go:172","msg":"trace[425275778] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"165.870595ms","start":"2025-11-09T14:13:46.533468Z","end":"2025-11-09T14:13:46.699338Z","steps":["trace[425275778] 'process raft request'  (duration: 140.303923ms)","trace[425275778] 'compare'  (duration: 25.385955ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:46.814795Z","caller":"traceutil/trace.go:172","msg":"trace[1361944150] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"108.993667ms","start":"2025-11-09T14:13:46.705790Z","end":"2025-11-09T14:13:46.814784Z","steps":["trace[1361944150] 'process raft request'  (duration: 100.874325ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:46.814740Z","caller":"traceutil/trace.go:172","msg":"trace[1064362036] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"108.564105ms","start":"2025-11-09T14:13:46.706152Z","end":"2025-11-09T14:13:46.814716Z","steps":["trace[1064362036] 'process raft request'  (duration: 108.233578ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:58.923596Z","caller":"traceutil/trace.go:172","msg":"trace[793428442] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"121.400855ms","start":"2025-11-09T14:13:58.802178Z","end":"2025-11-09T14:13:58.923579Z","steps":["trace[793428442] 'process raft request'  (duration: 121.286558ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:59.724200Z","caller":"traceutil/trace.go:172","msg":"trace[2024105581] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"129.661813ms","start":"2025-11-09T14:13:59.594517Z","end":"2025-11-09T14:13:59.724179Z","steps":["trace[2024105581] 'process raft request'  (duration: 125.99886ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:14:40 up 57 min,  0 user,  load average: 5.82, 3.76, 2.24
	Linux default-k8s-diff-port-326524 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aaa479f6c35e5b6fb8328bbd170d7e9be21970b4251c2f4bbe1b78a1360839b6] <==
	I1109 14:13:46.716490       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:13:46.791440       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:13:46.791595       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:13:46.791613       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:13:46.791633       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:13:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:13:47.015008       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:13:47.015039       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:13:47.015050       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:13:47.015973       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:14:17.015339       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:14:17.015554       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:14:17.015762       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1109 14:14:17.016454       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1109 14:14:18.515893       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:14:18.515919       1 metrics.go:72] Registering metrics
	I1109 14:14:18.515966       1 controller.go:711] "Syncing nftables rules"
	I1109 14:14:27.021315       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:14:27.021387       1 main.go:301] handling current node
	I1109 14:14:37.015420       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:14:37.015454       1 main.go:301] handling current node
	
	
	==> kube-apiserver [89098171562caf386c393b3b20a126aaec05f87b1e333b2b0bda48446c37fd03] <==
	I1109 14:13:37.989345       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 14:13:37.989379       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1109 14:13:37.990624       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:13:37.996443       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:13:38.001618       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:13:38.011951       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:13:38.173831       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:13:38.893170       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:13:38.897150       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:13:38.897166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:13:39.331402       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:13:39.362705       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:13:39.494031       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:13:39.499018       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1109 14:13:39.499835       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:13:39.503261       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:13:39.905601       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:13:40.628594       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:13:40.637718       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:13:40.643862       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:13:45.459073       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1109 14:13:45.719603       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:13:45.760750       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:13:46.070971       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1109 14:14:38.659373       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:55690: use of closed network connection
	
	
	==> kube-controller-manager [e52368a50ddc16879bb3878783df5a2e29c253d7184527bc5a822d1f75de7e5e] <==
	I1109 14:13:44.762259       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:13:44.804310       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:13:44.804345       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:13:44.805570       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:13:44.805692       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 14:13:44.805691       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:13:44.805819       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:13:44.805839       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:13:44.806153       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:13:44.806233       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:13:44.806731       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:13:44.807963       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:13:44.808089       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:13:44.809278       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:13:44.811544       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:13:44.813793       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:13:44.817973       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:13:44.819124       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:13:44.916165       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-326524" podCIDRs=["10.244.0.0/24"]
	I1109 14:13:45.015097       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 14:13:45.106203       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:13:45.106223       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:13:45.106232       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:13:45.115296       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:14:29.761907       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ee8b640d48a43b7f58edb1c1831b59eff99e2fe01fa016c4ac7b9205c53f3eaa] <==
	I1109 14:13:46.736413       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:13:46.801897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:13:46.902594       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:13:46.902662       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:13:46.902757       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:13:46.927453       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:13:46.927523       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:13:46.933927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:13:46.934819       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:13:46.935011       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:13:46.937596       1 config.go:200] "Starting service config controller"
	I1109 14:13:46.937669       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:13:46.937745       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:13:46.937765       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:13:46.938727       1 config.go:309] "Starting node config controller"
	I1109 14:13:46.939116       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:13:46.939171       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:13:46.939278       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:13:46.939295       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:13:47.037765       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:13:47.038955       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:13:47.039389       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b9951b4b7d1b54debe7265ac014654dc1e1f247999286d41de4fa4005cdc69dd] <==
	E1109 14:13:37.943250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:13:37.943408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:13:37.943482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:13:37.943574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:13:37.943569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:13:37.943581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:13:37.943669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:13:37.943637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:13:37.943799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:13:37.943809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:13:37.943820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:13:37.943843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:13:37.943907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:13:37.944024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:13:37.944024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:13:38.753908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:13:38.820998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:13:38.887516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:13:38.930423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:13:38.930828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:13:38.940467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 14:13:38.965934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:13:39.071883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:13:39.118943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1109 14:13:42.040851       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:13:41 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:41.507757    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-326524" podStartSLOduration=1.507708505 podStartE2EDuration="1.507708505s" podCreationTimestamp="2025-11-09 14:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:41.507453056 +0000 UTC m=+1.131958273" watchObservedRunningTime="2025-11-09 14:13:41.507708505 +0000 UTC m=+1.132213720"
	Nov 09 14:13:41 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:41.531607    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-326524" podStartSLOduration=1.53158541 podStartE2EDuration="1.53158541s" podCreationTimestamp="2025-11-09 14:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:41.519952987 +0000 UTC m=+1.144458227" watchObservedRunningTime="2025-11-09 14:13:41.53158541 +0000 UTC m=+1.156090626"
	Nov 09 14:13:41 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:41.540612    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-326524" podStartSLOduration=1.540595217 podStartE2EDuration="1.540595217s" podCreationTimestamp="2025-11-09 14:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:41.531981439 +0000 UTC m=+1.156486656" watchObservedRunningTime="2025-11-09 14:13:41.540595217 +0000 UTC m=+1.165100432"
	Nov 09 14:13:41 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:41.555932    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-326524" podStartSLOduration=1.555912927 podStartE2EDuration="1.555912927s" podCreationTimestamp="2025-11-09 14:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:41.540756979 +0000 UTC m=+1.165262191" watchObservedRunningTime="2025-11-09 14:13:41.555912927 +0000 UTC m=+1.180418143"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.004749    1325 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.005463    1325 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.892972    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t776b\" (UniqueName: \"kubernetes.io/projected/4c264413-e8be-44cf-97d3-3fbdc1ca9aa9-kube-api-access-t776b\") pod \"kindnet-fdxsl\" (UID: \"4c264413-e8be-44cf-97d3-3fbdc1ca9aa9\") " pod="kube-system/kindnet-fdxsl"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.893034    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c264413-e8be-44cf-97d3-3fbdc1ca9aa9-xtables-lock\") pod \"kindnet-fdxsl\" (UID: \"4c264413-e8be-44cf-97d3-3fbdc1ca9aa9\") " pod="kube-system/kindnet-fdxsl"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.893076    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39336fb9-1647-458b-802a-16247e882272-xtables-lock\") pod \"kube-proxy-n95wb\" (UID: \"39336fb9-1647-458b-802a-16247e882272\") " pod="kube-system/kube-proxy-n95wb"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.893099    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39336fb9-1647-458b-802a-16247e882272-lib-modules\") pod \"kube-proxy-n95wb\" (UID: \"39336fb9-1647-458b-802a-16247e882272\") " pod="kube-system/kube-proxy-n95wb"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.893123    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5trc\" (UniqueName: \"kubernetes.io/projected/39336fb9-1647-458b-802a-16247e882272-kube-api-access-f5trc\") pod \"kube-proxy-n95wb\" (UID: \"39336fb9-1647-458b-802a-16247e882272\") " pod="kube-system/kube-proxy-n95wb"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.893152    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c264413-e8be-44cf-97d3-3fbdc1ca9aa9-lib-modules\") pod \"kindnet-fdxsl\" (UID: \"4c264413-e8be-44cf-97d3-3fbdc1ca9aa9\") " pod="kube-system/kindnet-fdxsl"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.893176    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39336fb9-1647-458b-802a-16247e882272-kube-proxy\") pod \"kube-proxy-n95wb\" (UID: \"39336fb9-1647-458b-802a-16247e882272\") " pod="kube-system/kube-proxy-n95wb"
	Nov 09 14:13:45 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:45.893196    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4c264413-e8be-44cf-97d3-3fbdc1ca9aa9-cni-cfg\") pod \"kindnet-fdxsl\" (UID: \"4c264413-e8be-44cf-97d3-3fbdc1ca9aa9\") " pod="kube-system/kindnet-fdxsl"
	Nov 09 14:13:47 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:47.523834    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n95wb" podStartSLOduration=2.523810295 podStartE2EDuration="2.523810295s" podCreationTimestamp="2025-11-09 14:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:47.511358212 +0000 UTC m=+7.135863429" watchObservedRunningTime="2025-11-09 14:13:47.523810295 +0000 UTC m=+7.148315512"
	Nov 09 14:13:47 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:13:47.539532    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fdxsl" podStartSLOduration=2.53949117 podStartE2EDuration="2.53949117s" podCreationTimestamp="2025-11-09 14:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:13:47.523922038 +0000 UTC m=+7.148427254" watchObservedRunningTime="2025-11-09 14:13:47.53949117 +0000 UTC m=+7.163996387"
	Nov 09 14:14:27 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:27.215917    1325 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 09 14:14:27 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:27.283457    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a7e151f-1d30-4932-acb4-60f6c560cc8a-config-volume\") pod \"coredns-66bc5c9577-z8lkx\" (UID: \"2a7e151f-1d30-4932-acb4-60f6c560cc8a\") " pod="kube-system/coredns-66bc5c9577-z8lkx"
	Nov 09 14:14:27 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:27.283508    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rtl5\" (UniqueName: \"kubernetes.io/projected/2a7e151f-1d30-4932-acb4-60f6c560cc8a-kube-api-access-7rtl5\") pod \"coredns-66bc5c9577-z8lkx\" (UID: \"2a7e151f-1d30-4932-acb4-60f6c560cc8a\") " pod="kube-system/coredns-66bc5c9577-z8lkx"
	Nov 09 14:14:27 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:27.283541    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6-tmp\") pod \"storage-provisioner\" (UID: \"75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6\") " pod="kube-system/storage-provisioner"
	Nov 09 14:14:27 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:27.283572    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njpzj\" (UniqueName: \"kubernetes.io/projected/75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6-kube-api-access-njpzj\") pod \"storage-provisioner\" (UID: \"75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6\") " pod="kube-system/storage-provisioner"
	Nov 09 14:14:28 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:28.610946    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-z8lkx" podStartSLOduration=42.610925063 podStartE2EDuration="42.610925063s" podCreationTimestamp="2025-11-09 14:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:14:28.602479518 +0000 UTC m=+48.226984734" watchObservedRunningTime="2025-11-09 14:14:28.610925063 +0000 UTC m=+48.235430279"
	Nov 09 14:14:28 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:28.611188    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.611178096 podStartE2EDuration="42.611178096s" podCreationTimestamp="2025-11-09 14:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:14:28.610743591 +0000 UTC m=+48.235248812" watchObservedRunningTime="2025-11-09 14:14:28.611178096 +0000 UTC m=+48.235683311"
	Nov 09 14:14:30 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:30.602451    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79f6m\" (UniqueName: \"kubernetes.io/projected/fc5f7a0f-3467-424e-a629-38217364cc98-kube-api-access-79f6m\") pod \"busybox\" (UID: \"fc5f7a0f-3467-424e-a629-38217364cc98\") " pod="default/busybox"
	Nov 09 14:14:32 default-k8s-diff-port-326524 kubelet[1325]: I1109 14:14:32.612934    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.907988818 podStartE2EDuration="2.612915515s" podCreationTimestamp="2025-11-09 14:14:30 +0000 UTC" firstStartedPulling="2025-11-09 14:14:30.881073672 +0000 UTC m=+50.505578866" lastFinishedPulling="2025-11-09 14:14:31.586000369 +0000 UTC m=+51.210505563" observedRunningTime="2025-11-09 14:14:32.612576304 +0000 UTC m=+52.237081520" watchObservedRunningTime="2025-11-09 14:14:32.612915515 +0000 UTC m=+52.237420730"
	
	
	==> storage-provisioner [7b24d06752df147de027a40d994bd1b8a25dc8b2dcb4afaf58db7988781db5ea] <==
	I1109 14:14:27.618684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:14:27.629093       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:14:27.629149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:14:27.631313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:27.636988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:14:27.637139       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:14:27.637288       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-326524_8bf8ea4a-8cee-4af9-bd85-d336690e33f1!
	I1109 14:14:27.637549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9456f7ff-bf23-4b3e-a78e-e1e46b0b9684", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-326524_8bf8ea4a-8cee-4af9-bd85-d336690e33f1 became leader
	W1109 14:14:27.639505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:27.643557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:14:27.737784       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-326524_8bf8ea4a-8cee-4af9-bd85-d336690e33f1!
	W1109 14:14:29.647066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:29.650559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:31.653005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:31.656122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:33.658687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:33.663907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:35.666809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:35.670745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:37.674486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:37.679517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:39.682594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:39.686875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
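The etcd log above shows repeated "apply request took too long" warnings well past the 100ms budget, and the kernel section reports a load average of 5.82, so the slow applies most likely reflect contention on the CI host during this run rather than a cluster-level fault. A minimal spot check of etcd health on a node like this one (a sketch only: the pod name is taken from the node description above, while the certificate paths assume the usual minikube layout under /var/lib/minikube/certs/etcd and may differ) would be:

    # query etcd status from inside its static pod; cert paths are an assumption
    kubectl --context default-k8s-diff-port-326524 -n kube-system exec etcd-default-k8s-diff-port-326524 -- \
      etcdctl --cacert=/var/lib/minikube/certs/etcd/ca.crt \
              --cert=/var/lib/minikube/certs/etcd/server.crt \
              --key=/var/lib/minikube/certs/etcd/server.key \
              endpoint status --write-out=table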
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-326524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.37s)
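The failure above is in the addon-enable step run while the cluster is active; the node reports Ready in the description captured earlier, so the post-mortem state itself looks sane. A rough manual reproduction, assuming metrics-server is the addon this subtest toggles (the exact addon and any extra flags come from the test harness), would be:

    # enable the addon against the same profile, then check the rollout
    out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-326524 --alsologtostderr -v=1
    kubectl --context default-k8s-diff-port-326524 -n kube-system get deploy metrics-server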

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-273180 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-273180 --alsologtostderr -v=1: exit status 80 (1.88354344s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-273180 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:14:46.652782  280106 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:14:46.653026  280106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:46.653035  280106 out.go:374] Setting ErrFile to fd 2...
	I1109 14:14:46.653042  280106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:46.653325  280106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:14:46.653575  280106 out.go:368] Setting JSON to false
	I1109 14:14:46.653625  280106 mustload.go:66] Loading cluster: embed-certs-273180
	I1109 14:14:46.653984  280106 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:46.654343  280106 cli_runner.go:164] Run: docker container inspect embed-certs-273180 --format={{.State.Status}}
	I1109 14:14:46.674206  280106 host.go:66] Checking if "embed-certs-273180" exists ...
	I1109 14:14:46.674441  280106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:46.732334  280106 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-09 14:14:46.721053901 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:46.733200  280106 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-273180 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:14:46.737593  280106 out.go:179] * Pausing node embed-certs-273180 ... 
	I1109 14:14:46.739169  280106 host.go:66] Checking if "embed-certs-273180" exists ...
	I1109 14:14:46.739403  280106 ssh_runner.go:195] Run: systemctl --version
	I1109 14:14:46.739437  280106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-273180
	I1109 14:14:46.757365  280106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/embed-certs-273180/id_rsa Username:docker}
	I1109 14:14:46.849897  280106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:46.864535  280106 pause.go:52] kubelet running: true
	I1109 14:14:46.864618  280106 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:14:47.047236  280106 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:14:47.047341  280106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:14:47.119043  280106 cri.go:89] found id: "cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba"
	I1109 14:14:47.119068  280106 cri.go:89] found id: "cd8e640f3980854681179bacf773403036d5d364e134aba45034feb324e1a5c1"
	I1109 14:14:47.119074  280106 cri.go:89] found id: "961da20af9a59ad0af4140da7675b9435e7dcff1b6393a569154a81aa4bfb681"
	I1109 14:14:47.119078  280106 cri.go:89] found id: "595bbf89163b18285245a51e9a1992b62f9126f3823fbe528329a26e1311df9d"
	I1109 14:14:47.119082  280106 cri.go:89] found id: "9e732a5e0ee324731917ec205a3d4fb92d4e86259ec8e5cb6ae0474e3dfee477"
	I1109 14:14:47.119087  280106 cri.go:89] found id: "4f9f38d1a0f6c3b90459b53a1a0308e519ef2d1e4f12c24e072989aa297eab6c"
	I1109 14:14:47.119091  280106 cri.go:89] found id: "97f5a7b8e8b2ec193df908b13853b3f0d95619f6cc39fc3c693bf5f008f98071"
	I1109 14:14:47.119095  280106 cri.go:89] found id: "976a1e86747e59d5a7c8cdbc6eaef9d6d0fde3a08e20706cee6160921ddf6689"
	I1109 14:14:47.119100  280106 cri.go:89] found id: "9736e800f3ad26c7d4d7a6c889abcad2a30ef0f3907128567e28dbcdd9a9355e"
	I1109 14:14:47.119125  280106 cri.go:89] found id: "95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	I1109 14:14:47.119134  280106 cri.go:89] found id: "bb7e516a22712b6d157a69c73f6034857933d87290f12deb79ba4439103faeda"
	I1109 14:14:47.119138  280106 cri.go:89] found id: ""
	I1109 14:14:47.119179  280106 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:14:47.130666  280106 retry.go:31] will retry after 307.301439ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:47Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:14:47.438137  280106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:47.451479  280106 pause.go:52] kubelet running: false
	I1109 14:14:47.451535  280106 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:14:47.624820  280106 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:14:47.624899  280106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:14:47.694047  280106 cri.go:89] found id: "cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba"
	I1109 14:14:47.694066  280106 cri.go:89] found id: "cd8e640f3980854681179bacf773403036d5d364e134aba45034feb324e1a5c1"
	I1109 14:14:47.694070  280106 cri.go:89] found id: "961da20af9a59ad0af4140da7675b9435e7dcff1b6393a569154a81aa4bfb681"
	I1109 14:14:47.694073  280106 cri.go:89] found id: "595bbf89163b18285245a51e9a1992b62f9126f3823fbe528329a26e1311df9d"
	I1109 14:14:47.694077  280106 cri.go:89] found id: "9e732a5e0ee324731917ec205a3d4fb92d4e86259ec8e5cb6ae0474e3dfee477"
	I1109 14:14:47.694082  280106 cri.go:89] found id: "4f9f38d1a0f6c3b90459b53a1a0308e519ef2d1e4f12c24e072989aa297eab6c"
	I1109 14:14:47.694086  280106 cri.go:89] found id: "97f5a7b8e8b2ec193df908b13853b3f0d95619f6cc39fc3c693bf5f008f98071"
	I1109 14:14:47.694090  280106 cri.go:89] found id: "976a1e86747e59d5a7c8cdbc6eaef9d6d0fde3a08e20706cee6160921ddf6689"
	I1109 14:14:47.694094  280106 cri.go:89] found id: "9736e800f3ad26c7d4d7a6c889abcad2a30ef0f3907128567e28dbcdd9a9355e"
	I1109 14:14:47.694113  280106 cri.go:89] found id: "95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	I1109 14:14:47.694121  280106 cri.go:89] found id: "bb7e516a22712b6d157a69c73f6034857933d87290f12deb79ba4439103faeda"
	I1109 14:14:47.694125  280106 cri.go:89] found id: ""
	I1109 14:14:47.694166  280106 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:14:47.705883  280106 retry.go:31] will retry after 468.729246ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:47Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:14:48.175609  280106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:14:48.199140  280106 pause.go:52] kubelet running: false
	I1109 14:14:48.199204  280106 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:14:48.379815  280106 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:14:48.379877  280106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:14:48.456443  280106 cri.go:89] found id: "cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba"
	I1109 14:14:48.456471  280106 cri.go:89] found id: "cd8e640f3980854681179bacf773403036d5d364e134aba45034feb324e1a5c1"
	I1109 14:14:48.456476  280106 cri.go:89] found id: "961da20af9a59ad0af4140da7675b9435e7dcff1b6393a569154a81aa4bfb681"
	I1109 14:14:48.456481  280106 cri.go:89] found id: "595bbf89163b18285245a51e9a1992b62f9126f3823fbe528329a26e1311df9d"
	I1109 14:14:48.456486  280106 cri.go:89] found id: "9e732a5e0ee324731917ec205a3d4fb92d4e86259ec8e5cb6ae0474e3dfee477"
	I1109 14:14:48.456491  280106 cri.go:89] found id: "4f9f38d1a0f6c3b90459b53a1a0308e519ef2d1e4f12c24e072989aa297eab6c"
	I1109 14:14:48.456496  280106 cri.go:89] found id: "97f5a7b8e8b2ec193df908b13853b3f0d95619f6cc39fc3c693bf5f008f98071"
	I1109 14:14:48.456500  280106 cri.go:89] found id: "976a1e86747e59d5a7c8cdbc6eaef9d6d0fde3a08e20706cee6160921ddf6689"
	I1109 14:14:48.456505  280106 cri.go:89] found id: "9736e800f3ad26c7d4d7a6c889abcad2a30ef0f3907128567e28dbcdd9a9355e"
	I1109 14:14:48.456522  280106 cri.go:89] found id: "95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	I1109 14:14:48.456530  280106 cri.go:89] found id: "bb7e516a22712b6d157a69c73f6034857933d87290f12deb79ba4439103faeda"
	I1109 14:14:48.456534  280106 cri.go:89] found id: ""
	I1109 14:14:48.456584  280106 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:14:48.471879  280106 out.go:203] 
	W1109 14:14:48.472991  280106 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:14:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:14:48.473013  280106 out.go:285] * 
	* 
	W1109 14:14:48.477296  280106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:14:48.478505  280106 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-273180 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-273180
helpers_test.go:243: (dbg) docker inspect embed-certs-273180:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7",
	        "Created": "2025-11-09T14:12:40.11425745Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 264752,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:13:47.476282214Z",
	            "FinishedAt": "2025-11-09T14:13:44.233721916Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/hosts",
	        "LogPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7-json.log",
	        "Name": "/embed-certs-273180",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-273180:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-273180",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7",
	                "LowerDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-273180",
	                "Source": "/var/lib/docker/volumes/embed-certs-273180/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-273180",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-273180",
	                "name.minikube.sigs.k8s.io": "embed-certs-273180",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0263ac70a60688635adb3eb89232838d3ba9f3c55e6e358f3c8c902be4c1ee68",
	            "SandboxKey": "/var/run/docker/netns/0263ac70a606",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-273180": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:af:0a:9d:90:9f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e4394163f33a23d3fe460b68d1b70efd91c45ded0aedfe59220d7876ad042ed",
	                    "EndpointID": "66188b03009b701ae808b83780b139dcfa1b7c2126e6b79dcf4da6a98492d40f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-273180",
	                        "da002f6826ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-273180 -n embed-certs-273180
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-273180 -n embed-certs-273180: exit status 2 (378.277176ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-273180 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-273180 logs -n 25: (1.082467452s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ no-preload-152932 image list --format=json                                                                                                                                                                                                    │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p no-preload-152932 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p auto-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ stop    │ -p newest-cni-331530 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-331530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ newest-cni-331530 image list --format=json                                                                                                                                                                                                    │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ pause   │ -p newest-cni-331530 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-326524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ ssh     │ -p auto-593530 pgrep -a kubelet                                                                                                                                                                                                               │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ stop    │ -p default-k8s-diff-port-326524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ delete  │ -p newest-cni-331530                                                                                                                                                                                                                          │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ embed-certs-273180 image list --format=json                                                                                                                                                                                                   │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ pause   │ -p embed-certs-273180 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ delete  │ -p newest-cni-331530                                                                                                                                                                                                                          │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ start   │ -p kindnet-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-593530               │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:14:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:14:47.346520  280419 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:14:47.346614  280419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:47.346621  280419 out.go:374] Setting ErrFile to fd 2...
	I1109 14:14:47.346625  280419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:47.346844  280419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:14:47.347284  280419 out.go:368] Setting JSON to false
	I1109 14:14:47.348369  280419 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3437,"bootTime":1762694250,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:14:47.348419  280419 start.go:143] virtualization: kvm guest
	I1109 14:14:47.350006  280419 out.go:179] * [kindnet-593530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:14:47.351163  280419 notify.go:221] Checking for updates...
	I1109 14:14:47.351204  280419 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:14:47.352960  280419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:14:47.354323  280419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:47.355350  280419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:14:47.356354  280419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:14:47.357300  280419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:14:47.358548  280419 config.go:182] Loaded profile config "auto-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:47.358666  280419 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:47.358762  280419 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:47.358866  280419 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:14:47.384154  280419 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:14:47.384236  280419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:47.438024  280419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-09 14:14:47.428812615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:47.438185  280419 docker.go:319] overlay module found
	I1109 14:14:47.439630  280419 out.go:179] * Using the docker driver based on user configuration
	I1109 14:14:47.440799  280419 start.go:309] selected driver: docker
	I1109 14:14:47.440817  280419 start.go:930] validating driver "docker" against <nil>
	I1109 14:14:47.440831  280419 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:14:47.441553  280419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:47.509158  280419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-09 14:14:47.493492143 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:47.509448  280419 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:14:47.509777  280419 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:14:47.511618  280419 out.go:179] * Using Docker driver with root privileges
	I1109 14:14:47.512869  280419 cni.go:84] Creating CNI manager for "kindnet"
	I1109 14:14:47.512885  280419 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:14:47.512955  280419 start.go:353] cluster config:
	{Name:kindnet-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:47.514136  280419 out.go:179] * Starting "kindnet-593530" primary control-plane node in "kindnet-593530" cluster
	I1109 14:14:47.515498  280419 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:14:47.516666  280419 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:14:47.517769  280419 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:47.517803  280419 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:14:47.517812  280419 cache.go:65] Caching tarball of preloaded images
	I1109 14:14:47.517873  280419 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:14:47.517875  280419 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:14:47.517883  280419 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:14:47.517974  280419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kindnet-593530/config.json ...
	I1109 14:14:47.517997  280419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kindnet-593530/config.json: {Name:mkfecc59ada6257b7b5ca7dd6401d3c0a770c055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:47.537042  280419 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:14:47.537058  280419 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:14:47.537072  280419 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:14:47.537104  280419 start.go:360] acquireMachinesLock for kindnet-593530: {Name:mk5a10a63cf9105a8ff76500eb9e482fca2462d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:14:47.537190  280419 start.go:364] duration metric: took 68.991µs to acquireMachinesLock for "kindnet-593530"
	I1109 14:14:47.537217  280419 start.go:93] Provisioning new machine with config: &{Name:kindnet-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-593530 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:14:47.537283  280419 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 09 14:14:07 embed-certs-273180 crio[565]: time="2025-11-09T14:14:07.882831654Z" level=info msg="Started container" PID=1745 containerID=3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper id=ff969f38-75d6-4471-bc61-d83aed9a7b5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=01622cfc58f4ad6c7d64603c95776a0ea487b7538438ae8b97b646d0950385db
	Nov 09 14:14:08 embed-certs-273180 crio[565]: time="2025-11-09T14:14:08.845191496Z" level=info msg="Removing container: 8c767768ee151ac9d5414412223ba650d3d3c39cda0dc2df586cc6548c581e16" id=c52e55a9-39ec-4b2e-94d1-b4c83dd4729f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:14:08 embed-certs-273180 crio[565]: time="2025-11-09T14:14:08.855048122Z" level=info msg="Removed container 8c767768ee151ac9d5414412223ba650d3d3c39cda0dc2df586cc6548c581e16: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper" id=c52e55a9-39ec-4b2e-94d1-b4c83dd4729f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.893234765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a6c01f57-c4df-4f0a-b107-54f31bc21134 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.894223072Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb84e6d3-fa61-4162-a303-d19c5544d8e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.895271882Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e9ed45d9-08b6-415d-966f-6cc450c42701 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.895422073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.899689368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.899857586Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ffcb54bd15ef7d54a5144156abc244c618c82969ee54a2b564b14ad92f9d7d51/merged/etc/passwd: no such file or directory"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.8998887Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ffcb54bd15ef7d54a5144156abc244c618c82969ee54a2b564b14ad92f9d7d51/merged/etc/group: no such file or directory"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.900128704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.936464307Z" level=info msg="Created container cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba: kube-system/storage-provisioner/storage-provisioner" id=e9ed45d9-08b6-415d-966f-6cc450c42701 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.937060891Z" level=info msg="Starting container: cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba" id=df2d8379-77db-4dc5-950f-2fdb5560f446 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.939307159Z" level=info msg="Started container" PID=1762 containerID=cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba description=kube-system/storage-provisioner/storage-provisioner id=df2d8379-77db-4dc5-950f-2fdb5560f446 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ba40612b1e30a395aa91a02ad3887eff71065cfee1e28b889c0fe10c6c2e2fd
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.763962267Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52fc8cfb-0662-4065-a3f8-c8445668e9e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.764852739Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=89546c76-3ca6-4d11-b052-aaacabd4c00a name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.765998447Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper" id=92e4799a-a44e-4f5b-9f38-a2a85b478ec0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.766131329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.772400544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.773042318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.806468366Z" level=info msg="Created container 95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper" id=92e4799a-a44e-4f5b-9f38-a2a85b478ec0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.807078004Z" level=info msg="Starting container: 95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb" id=0e5f6c44-5f43-45b9-9a94-686ccd3c42cd name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.808755933Z" level=info msg="Started container" PID=1777 containerID=95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper id=0e5f6c44-5f43-45b9-9a94-686ccd3c42cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=01622cfc58f4ad6c7d64603c95776a0ea487b7538438ae8b97b646d0950385db
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.897917003Z" level=info msg="Removing container: 3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658" id=d0fdc1f1-4b68-43b9-a506-f30ca31fcf03 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.909983893Z" level=info msg="Removed container 3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper" id=d0fdc1f1-4b68-43b9-a506-f30ca31fcf03 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	95f7825f27c31       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   01622cfc58f4a       dashboard-metrics-scraper-6ffb444bf9-6xx4w   kubernetes-dashboard
	cf92131e5324b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   5ba40612b1e30       storage-provisioner                          kube-system
	bb7e516a22712       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   07d39eb3dcf4a       kubernetes-dashboard-855c9754f9-p4m9s        kubernetes-dashboard
	ee6c882af6aa0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   5a6fe4d6adb6f       busybox                                      default
	cd8e640f39808       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   1630defd31501       coredns-66bc5c9577-bbnm4                     kube-system
	961da20af9a59       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   2b09646eb7e38       kube-proxy-k6zsl                             kube-system
	595bbf89163b1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   bc4cb5d442d2b       kindnet-scgq8                                kube-system
	9e732a5e0ee32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   5ba40612b1e30       storage-provisioner                          kube-system
	4f9f38d1a0f6c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   e5179329456f0       etcd-embed-certs-273180                      kube-system
	97f5a7b8e8b2e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   77a75056db83e       kube-scheduler-embed-certs-273180            kube-system
	976a1e86747e5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   d00c4fe19d90c       kube-controller-manager-embed-certs-273180   kube-system
	9736e800f3ad2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   5c11f706d9dff       kube-apiserver-embed-certs-273180            kube-system
	
	
	==> coredns [cd8e640f3980854681179bacf773403036d5d364e134aba45034feb324e1a5c1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53852 - 33963 "HINFO IN 5870066111307080810.5615568829944748013. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092263267s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-273180
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-273180
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=embed-certs-273180
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_12_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:12:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-273180
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:14:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:14:37 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:14:37 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:14:37 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:14:37 +0000   Sun, 09 Nov 2025 14:13:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-273180
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                ca6fdff2-5006-4b63-a78c-0c296485de58
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-bbnm4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-273180                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-scgq8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-273180             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-273180    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-k6zsl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-273180             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6xx4w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p4m9s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node embed-certs-273180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node embed-certs-273180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node embed-certs-273180 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node embed-certs-273180 event: Registered Node embed-certs-273180 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-273180 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node embed-certs-273180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node embed-certs-273180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node embed-certs-273180 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node embed-certs-273180 event: Registered Node embed-certs-273180 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [4f9f38d1a0f6c3b90459b53a1a0308e519ef2d1e4f12c24e072989aa297eab6c] <==
	{"level":"warn","ts":"2025-11-09T14:13:55.588269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.605843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.610693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.617093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.625041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.672835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33640","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:13:57.986781Z","caller":"traceutil/trace.go:172","msg":"trace[1454070792] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:502; }","duration":"124.563245ms","start":"2025-11-09T14:13:57.862190Z","end":"2025-11-09T14:13:57.986753Z","steps":["trace[1454070792] 'read index received'  (duration: 124.5548ms)","trace[1454070792] 'applied index is now lower than readState.Index'  (duration: 6.735µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:57.986923Z","caller":"traceutil/trace.go:172","msg":"trace[25412613] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"136.98974ms","start":"2025-11-09T14:13:57.849912Z","end":"2025-11-09T14:13:57.986902Z","steps":["trace[25412613] 'process raft request'  (duration: 136.878132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:57.986937Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.703647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T14:13:57.987092Z","caller":"traceutil/trace.go:172","msg":"trace[1786821572] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:473; }","duration":"124.899457ms","start":"2025-11-09T14:13:57.862183Z","end":"2025-11-09T14:13:57.987083Z","steps":["trace[1786821572] 'agreement among raft nodes before linearized reading'  (duration: 124.669295ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:58.166053Z","caller":"traceutil/trace.go:172","msg":"trace[90093492] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"161.161847ms","start":"2025-11-09T14:13:58.004869Z","end":"2025-11-09T14:13:58.166031Z","steps":["trace[90093492] 'process raft request'  (duration: 125.327356ms)","trace[90093492] 'compare'  (duration: 35.726729ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:13:58.421128Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.071537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller\" limit:1 ","response":"range_response_count:1 size:252"}
	{"level":"info","ts":"2025-11-09T14:13:58.421186Z","caller":"traceutil/trace.go:172","msg":"trace[2075475489] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller; range_end:; response_count:1; response_revision:477; }","duration":"148.162592ms","start":"2025-11-09T14:13:58.273014Z","end":"2025-11-09T14:13:58.421177Z","steps":["trace[2075475489] 'range keys from in-memory index tree'  (duration: 147.949471ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:59.720428Z","caller":"traceutil/trace.go:172","msg":"trace[1297080510] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:507; }","duration":"101.127303ms","start":"2025-11-09T14:13:59.619277Z","end":"2025-11-09T14:13:59.720404Z","steps":["trace[1297080510] 'read index received'  (duration: 101.119896ms)","trace[1297080510] 'applied index is now lower than readState.Index'  (duration: 5.751µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:13:59.724047Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.750858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-09T14:13:59.724104Z","caller":"traceutil/trace.go:172","msg":"trace[304818503] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:477; }","duration":"104.818926ms","start":"2025-11-09T14:13:59.619274Z","end":"2025-11-09T14:13:59.724093Z","steps":["trace[304818503] 'agreement among raft nodes before linearized reading'  (duration: 101.23885ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:59.724105Z","caller":"traceutil/trace.go:172","msg":"trace[1579320021] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"129.916096ms","start":"2025-11-09T14:13:59.594164Z","end":"2025-11-09T14:13:59.724080Z","steps":["trace[1579320021] 'process raft request'  (duration: 126.275023ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:59.724166Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.170509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:2895"}
	{"level":"warn","ts":"2025-11-09T14:13:59.724196Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.141984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4437"}
	{"level":"info","ts":"2025-11-09T14:13:59.724199Z","caller":"traceutil/trace.go:172","msg":"trace[1912907246] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:478; }","duration":"101.228446ms","start":"2025-11-09T14:13:59.622963Z","end":"2025-11-09T14:13:59.724191Z","steps":["trace[1912907246] 'agreement among raft nodes before linearized reading'  (duration: 101.108034ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:59.724231Z","caller":"traceutil/trace.go:172","msg":"trace[802543164] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:478; }","duration":"101.181849ms","start":"2025-11-09T14:13:59.623040Z","end":"2025-11-09T14:13:59.724222Z","steps":["trace[802543164] 'agreement among raft nodes before linearized reading'  (duration: 101.0593ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:59.724403Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.393947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/kube-dns\" limit:1 ","response":"range_response_count:1 size:1211"}
	{"level":"info","ts":"2025-11-09T14:13:59.724434Z","caller":"traceutil/trace.go:172","msg":"trace[1604086386] range","detail":"{range_begin:/registry/services/specs/kube-system/kube-dns; range_end:; response_count:1; response_revision:478; }","duration":"101.426154ms","start":"2025-11-09T14:13:59.622998Z","end":"2025-11-09T14:13:59.724425Z","steps":["trace[1604086386] 'agreement among raft nodes before linearized reading'  (duration: 101.356307ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:59.724436Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.434883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" limit:1 ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2025-11-09T14:13:59.724470Z","caller":"traceutil/trace.go:172","msg":"trace[1841103548] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-66bc5c9577; range_end:; response_count:1; response_revision:478; }","duration":"101.470212ms","start":"2025-11-09T14:13:59.622989Z","end":"2025-11-09T14:13:59.724459Z","steps":["trace[1841103548] 'agreement among raft nodes before linearized reading'  (duration: 101.371842ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:14:49 up 57 min,  0 user,  load average: 5.01, 3.65, 2.22
	Linux embed-certs-273180 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [595bbf89163b18285245a51e9a1992b62f9126f3823fbe528329a26e1311df9d] <==
	I1109 14:13:57.327920       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:13:57.328214       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1109 14:13:57.328419       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:13:57.328443       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:13:57.328465       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:13:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:13:57.587985       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:13:57.630234       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:13:57.630250       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:13:57.630389       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:13:57.830958       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:13:57.830986       1 metrics.go:72] Registering metrics
	I1109 14:13:57.831030       1 controller.go:711] "Syncing nftables rules"
	I1109 14:14:07.587703       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:07.587780       1 main.go:301] handling current node
	I1109 14:14:17.591206       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:17.591241       1 main.go:301] handling current node
	I1109 14:14:27.587520       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:27.587575       1 main.go:301] handling current node
	I1109 14:14:37.588726       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:37.588755       1 main.go:301] handling current node
	I1109 14:14:47.595717       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:47.595754       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9736e800f3ad26c7d4d7a6c889abcad2a30ef0f3907128567e28dbcdd9a9355e] <==
	I1109 14:13:56.287896       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:13:56.288319       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:13:56.288348       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:13:56.288373       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:13:56.288396       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:13:56.287635       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1109 14:13:56.287193       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:13:56.287224       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1109 14:13:56.292225       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:13:56.299042       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:13:56.319455       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:13:56.335112       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:13:56.352845       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:13:56.571607       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:13:56.601137       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:13:56.618100       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:13:56.624632       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:13:56.634884       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:13:56.667241       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.222.159"}
	I1109 14:13:56.684566       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.157.22"}
	I1109 14:13:57.186693       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:13:59.975579       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:14:00.176611       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:14:00.176611       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:14:00.224948       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [976a1e86747e59d5a7c8cdbc6eaef9d6d0fde3a08e20706cee6160921ddf6689] <==
	I1109 14:13:59.621725       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:13:59.621798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:13:59.621815       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:13:59.621822       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:13:59.621817       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:13:59.621831       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:13:59.624184       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1109 14:13:59.625360       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1109 14:13:59.625382       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:13:59.625407       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:13:59.625427       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:13:59.625447       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:13:59.625449       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:13:59.625490       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:13:59.625521       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:13:59.625527       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:13:59.625532       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:13:59.627831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:13:59.632091       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 14:13:59.633286       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:13:59.637541       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:13:59.640721       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:13:59.640744       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:13:59.642914       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:14:00.179330       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [961da20af9a59ad0af4140da7675b9435e7dcff1b6393a569154a81aa4bfb681] <==
	I1109 14:13:57.156260       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:13:57.233862       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:13:57.334046       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:13:57.334086       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1109 14:13:57.334202       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:13:57.356592       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:13:57.356655       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:13:57.363108       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:13:57.363633       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:13:57.363687       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:13:57.365835       1 config.go:200] "Starting service config controller"
	I1109 14:13:57.365851       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:13:57.365878       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:13:57.365884       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:13:57.365934       1 config.go:309] "Starting node config controller"
	I1109 14:13:57.365975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:13:57.366007       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:13:57.366369       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:13:57.366384       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:13:57.466548       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:13:57.466560       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:13:57.466586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [97f5a7b8e8b2ec193df908b13853b3f0d95619f6cc39fc3c693bf5f008f98071] <==
	I1109 14:13:54.662228       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:13:56.254096       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:13:56.254137       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:13:56.254149       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:13:56.254159       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:13:56.276801       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:13:56.276834       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:13:56.280063       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:13:56.280109       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:13:56.280856       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:13:56.281088       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:13:56.381359       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:14:00 embed-certs-273180 kubelet[718]: I1109 14:14:00.053042     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfds\" (UniqueName: \"kubernetes.io/projected/65d8f2ce-1bc0-4c22-8527-78d217576a5f-kube-api-access-hrfds\") pod \"dashboard-metrics-scraper-6ffb444bf9-6xx4w\" (UID: \"65d8f2ce-1bc0-4c22-8527-78d217576a5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w"
	Nov 09 14:14:00 embed-certs-273180 kubelet[718]: I1109 14:14:00.053076     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/65d8f2ce-1bc0-4c22-8527-78d217576a5f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-6xx4w\" (UID: \"65d8f2ce-1bc0-4c22-8527-78d217576a5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w"
	Nov 09 14:14:02 embed-certs-273180 kubelet[718]: I1109 14:14:02.010461     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 09 14:14:04 embed-certs-273180 kubelet[718]: I1109 14:14:04.844566     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p4m9s" podStartSLOduration=1.174323397 podStartE2EDuration="4.844532273s" podCreationTimestamp="2025-11-09 14:14:00 +0000 UTC" firstStartedPulling="2025-11-09 14:14:00.375175009 +0000 UTC m=+6.723276310" lastFinishedPulling="2025-11-09 14:14:04.045383894 +0000 UTC m=+10.393485186" observedRunningTime="2025-11-09 14:14:04.844035562 +0000 UTC m=+11.192136865" watchObservedRunningTime="2025-11-09 14:14:04.844532273 +0000 UTC m=+11.192633578"
	Nov 09 14:14:07 embed-certs-273180 kubelet[718]: I1109 14:14:07.839534     718 scope.go:117] "RemoveContainer" containerID="8c767768ee151ac9d5414412223ba650d3d3c39cda0dc2df586cc6548c581e16"
	Nov 09 14:14:08 embed-certs-273180 kubelet[718]: I1109 14:14:08.843865     718 scope.go:117] "RemoveContainer" containerID="8c767768ee151ac9d5414412223ba650d3d3c39cda0dc2df586cc6548c581e16"
	Nov 09 14:14:08 embed-certs-273180 kubelet[718]: I1109 14:14:08.844029     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:08 embed-certs-273180 kubelet[718]: E1109 14:14:08.844228     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:09 embed-certs-273180 kubelet[718]: I1109 14:14:09.849125     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:09 embed-certs-273180 kubelet[718]: E1109 14:14:09.849330     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:15 embed-certs-273180 kubelet[718]: I1109 14:14:15.524323     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:15 embed-certs-273180 kubelet[718]: E1109 14:14:15.524545     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:27 embed-certs-273180 kubelet[718]: I1109 14:14:27.892862     718 scope.go:117] "RemoveContainer" containerID="9e732a5e0ee324731917ec205a3d4fb92d4e86259ec8e5cb6ae0474e3dfee477"
	Nov 09 14:14:28 embed-certs-273180 kubelet[718]: I1109 14:14:28.763435     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:28 embed-certs-273180 kubelet[718]: I1109 14:14:28.896682     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:28 embed-certs-273180 kubelet[718]: I1109 14:14:28.896888     718 scope.go:117] "RemoveContainer" containerID="95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	Nov 09 14:14:28 embed-certs-273180 kubelet[718]: E1109 14:14:28.897092     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:35 embed-certs-273180 kubelet[718]: I1109 14:14:35.524077     718 scope.go:117] "RemoveContainer" containerID="95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	Nov 09 14:14:35 embed-certs-273180 kubelet[718]: E1109 14:14:35.524286     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:45 embed-certs-273180 kubelet[718]: I1109 14:14:45.763293     718 scope.go:117] "RemoveContainer" containerID="95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	Nov 09 14:14:45 embed-certs-273180 kubelet[718]: E1109 14:14:45.763571     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:47 embed-certs-273180 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:14:47 embed-certs-273180 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:14:47 embed-certs-273180 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:14:47 embed-certs-273180 systemd[1]: kubelet.service: Consumed 1.583s CPU time.
	
	
	==> kubernetes-dashboard [bb7e516a22712b6d157a69c73f6034857933d87290f12deb79ba4439103faeda] <==
	2025/11/09 14:14:04 Starting overwatch
	2025/11/09 14:14:04 Using namespace: kubernetes-dashboard
	2025/11/09 14:14:04 Using in-cluster config to connect to apiserver
	2025/11/09 14:14:04 Using secret token for csrf signing
	2025/11/09 14:14:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:14:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:14:04 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:14:04 Generating JWE encryption key
	2025/11/09 14:14:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:14:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:14:04 Initializing JWE encryption key from synchronized object
	2025/11/09 14:14:04 Creating in-cluster Sidecar client
	2025/11/09 14:14:04 Serving insecurely on HTTP port: 9090
	2025/11/09 14:14:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:14:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9e732a5e0ee324731917ec205a3d4fb92d4e86259ec8e5cb6ae0474e3dfee477] <==
	I1109 14:13:57.117356       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:14:27.120910       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba] <==
	I1109 14:14:27.953289       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:14:27.961894       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:14:27.961944       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:14:27.964064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:31.419392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:35.679863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:39.278262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:42.332005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:45.354103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:45.359274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:14:45.359413       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:14:45.359491       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee6b9ad1-7e0f-4b6d-8696-e4410f1b9328", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-273180_7abb159b-6632-488b-91a1-fd07d842913d became leader
	I1109 14:14:45.359530       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-273180_7abb159b-6632-488b-91a1-fd07d842913d!
	W1109 14:14:45.361356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:45.364285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:14:45.459747       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-273180_7abb159b-6632-488b-91a1-fd07d842913d!
	W1109 14:14:47.368373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:47.374156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:49.378518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:49.382748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-273180 -n embed-certs-273180
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-273180 -n embed-certs-273180: exit status 2 (328.102413ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-273180 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-273180
helpers_test.go:243: (dbg) docker inspect embed-certs-273180:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7",
	        "Created": "2025-11-09T14:12:40.11425745Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 264752,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:13:47.476282214Z",
	            "FinishedAt": "2025-11-09T14:13:44.233721916Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/hosts",
	        "LogPath": "/var/lib/docker/containers/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7/da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7-json.log",
	        "Name": "/embed-certs-273180",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-273180:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-273180",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "da002f6826efdd71e006c700516afb3444851389a7f274892a27b483ba4f75f7",
	                "LowerDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9602747b4dc7cd2b99d04208211d49870115e57ef519393f716f3f462a836b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-273180",
	                "Source": "/var/lib/docker/volumes/embed-certs-273180/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-273180",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-273180",
	                "name.minikube.sigs.k8s.io": "embed-certs-273180",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0263ac70a60688635adb3eb89232838d3ba9f3c55e6e358f3c8c902be4c1ee68",
	            "SandboxKey": "/var/run/docker/netns/0263ac70a606",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-273180": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:af:0a:9d:90:9f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e4394163f33a23d3fe460b68d1b70efd91c45ded0aedfe59220d7876ad042ed",
	                    "EndpointID": "66188b03009b701ae808b83780b139dcfa1b7c2126e6b79dcf4da6a98492d40f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-273180",
	                        "da002f6826ef"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
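(Editorial aside, not part of the harness output: the JSON above appears to be the `docker inspect` result for the embed-certs-273180 node container, captured during post-mortem collection. As a minimal sketch of how one might read the host-mapped API server port out of such output, assuming the JSON is saved locally as inspect.json — the filename and the trimmed struct fields are illustrative assumptions, not anything the tests use:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// portBinding mirrors the entries under NetworkSettings.Ports in the JSON above.
	type portBinding struct {
		HostIp   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	// inspectEntry keeps only the fields this sketch needs.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// Assumed filename: the inspect JSON shown above, saved to disk.
		data, err := os.ReadFile("inspect.json")
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // `docker inspect` emits a JSON array
		if err := json.Unmarshal(data, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			for _, b := range e.NetworkSettings.Ports["8443/tcp"] {
				fmt.Printf("apiserver (8443/tcp) mapped to %s:%s\n", b.HostIp, b.HostPort)
			}
		}
	}

Against the inspect output shown above, this would print 127.0.0.1:33093 as the host endpoint for the container's 8443/tcp port.)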
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-273180 -n embed-certs-273180
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-273180 -n embed-certs-273180: exit status 2 (310.037472ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-273180 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-273180 logs -n 25: (2.437759291s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p old-k8s-version-169816 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p old-k8s-version-169816                                                                                                                                                                                                                     │ old-k8s-version-169816       │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ no-preload-152932 image list --format=json                                                                                                                                                                                                    │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ pause   │ -p no-preload-152932 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ delete  │ -p no-preload-152932                                                                                                                                                                                                                          │ no-preload-152932            │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:13 UTC │
	│ start   │ -p auto-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:13 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-331530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ stop    │ -p newest-cni-331530 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-331530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ start   │ -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ newest-cni-331530 image list --format=json                                                                                                                                                                                                    │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ pause   │ -p newest-cni-331530 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-326524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ ssh     │ -p auto-593530 pgrep -a kubelet                                                                                                                                                                                                               │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ stop    │ -p default-k8s-diff-port-326524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ delete  │ -p newest-cni-331530                                                                                                                                                                                                                          │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ image   │ embed-certs-273180 image list --format=json                                                                                                                                                                                                   │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ pause   │ -p embed-certs-273180 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-273180           │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	│ delete  │ -p newest-cni-331530                                                                                                                                                                                                                          │ newest-cni-331530            │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │ 09 Nov 25 14:14 UTC │
	│ start   │ -p kindnet-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-593530               │ jenkins │ v1.37.0 │ 09 Nov 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:14:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:14:47.346520  280419 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:14:47.346614  280419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:47.346621  280419 out.go:374] Setting ErrFile to fd 2...
	I1109 14:14:47.346625  280419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:14:47.346844  280419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:14:47.347284  280419 out.go:368] Setting JSON to false
	I1109 14:14:47.348369  280419 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3437,"bootTime":1762694250,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:14:47.348419  280419 start.go:143] virtualization: kvm guest
	I1109 14:14:47.350006  280419 out.go:179] * [kindnet-593530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:14:47.351163  280419 notify.go:221] Checking for updates...
	I1109 14:14:47.351204  280419 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:14:47.352960  280419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:14:47.354323  280419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:14:47.355350  280419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:14:47.356354  280419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:14:47.357300  280419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:14:47.358548  280419 config.go:182] Loaded profile config "auto-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:47.358666  280419 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:47.358762  280419 config.go:182] Loaded profile config "embed-certs-273180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:14:47.358866  280419 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:14:47.384154  280419 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:14:47.384236  280419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:47.438024  280419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-09 14:14:47.428812615 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:47.438185  280419 docker.go:319] overlay module found
	I1109 14:14:47.439630  280419 out.go:179] * Using the docker driver based on user configuration
	I1109 14:14:47.440799  280419 start.go:309] selected driver: docker
	I1109 14:14:47.440817  280419 start.go:930] validating driver "docker" against <nil>
	I1109 14:14:47.440831  280419 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:14:47.441553  280419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:14:47.509158  280419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-09 14:14:47.493492143 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:14:47.509448  280419 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:14:47.509777  280419 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:14:47.511618  280419 out.go:179] * Using Docker driver with root privileges
	I1109 14:14:47.512869  280419 cni.go:84] Creating CNI manager for "kindnet"
	I1109 14:14:47.512885  280419 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:14:47.512955  280419 start.go:353] cluster config:
	{Name:kindnet-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:14:47.514136  280419 out.go:179] * Starting "kindnet-593530" primary control-plane node in "kindnet-593530" cluster
	I1109 14:14:47.515498  280419 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:14:47.516666  280419 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:14:47.517769  280419 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:14:47.517803  280419 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:14:47.517812  280419 cache.go:65] Caching tarball of preloaded images
	I1109 14:14:47.517873  280419 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:14:47.517875  280419 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:14:47.517883  280419 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:14:47.517974  280419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kindnet-593530/config.json ...
	I1109 14:14:47.517997  280419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kindnet-593530/config.json: {Name:mkfecc59ada6257b7b5ca7dd6401d3c0a770c055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:14:47.537042  280419 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:14:47.537058  280419 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:14:47.537072  280419 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:14:47.537104  280419 start.go:360] acquireMachinesLock for kindnet-593530: {Name:mk5a10a63cf9105a8ff76500eb9e482fca2462d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:14:47.537190  280419 start.go:364] duration metric: took 68.991µs to acquireMachinesLock for "kindnet-593530"
	I1109 14:14:47.537217  280419 start.go:93] Provisioning new machine with config: &{Name:kindnet-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-593530 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:14:47.537283  280419 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 09 14:14:07 embed-certs-273180 crio[565]: time="2025-11-09T14:14:07.882831654Z" level=info msg="Started container" PID=1745 containerID=3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper id=ff969f38-75d6-4471-bc61-d83aed9a7b5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=01622cfc58f4ad6c7d64603c95776a0ea487b7538438ae8b97b646d0950385db
	Nov 09 14:14:08 embed-certs-273180 crio[565]: time="2025-11-09T14:14:08.845191496Z" level=info msg="Removing container: 8c767768ee151ac9d5414412223ba650d3d3c39cda0dc2df586cc6548c581e16" id=c52e55a9-39ec-4b2e-94d1-b4c83dd4729f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:14:08 embed-certs-273180 crio[565]: time="2025-11-09T14:14:08.855048122Z" level=info msg="Removed container 8c767768ee151ac9d5414412223ba650d3d3c39cda0dc2df586cc6548c581e16: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper" id=c52e55a9-39ec-4b2e-94d1-b4c83dd4729f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.893234765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a6c01f57-c4df-4f0a-b107-54f31bc21134 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.894223072Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb84e6d3-fa61-4162-a303-d19c5544d8e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.895271882Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e9ed45d9-08b6-415d-966f-6cc450c42701 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.895422073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.899689368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.899857586Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ffcb54bd15ef7d54a5144156abc244c618c82969ee54a2b564b14ad92f9d7d51/merged/etc/passwd: no such file or directory"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.8998887Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ffcb54bd15ef7d54a5144156abc244c618c82969ee54a2b564b14ad92f9d7d51/merged/etc/group: no such file or directory"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.900128704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.936464307Z" level=info msg="Created container cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba: kube-system/storage-provisioner/storage-provisioner" id=e9ed45d9-08b6-415d-966f-6cc450c42701 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.937060891Z" level=info msg="Starting container: cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba" id=df2d8379-77db-4dc5-950f-2fdb5560f446 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:27 embed-certs-273180 crio[565]: time="2025-11-09T14:14:27.939307159Z" level=info msg="Started container" PID=1762 containerID=cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba description=kube-system/storage-provisioner/storage-provisioner id=df2d8379-77db-4dc5-950f-2fdb5560f446 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ba40612b1e30a395aa91a02ad3887eff71065cfee1e28b889c0fe10c6c2e2fd
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.763962267Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52fc8cfb-0662-4065-a3f8-c8445668e9e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.764852739Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=89546c76-3ca6-4d11-b052-aaacabd4c00a name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.765998447Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper" id=92e4799a-a44e-4f5b-9f38-a2a85b478ec0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.766131329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.772400544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.773042318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.806468366Z" level=info msg="Created container 95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper" id=92e4799a-a44e-4f5b-9f38-a2a85b478ec0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.807078004Z" level=info msg="Starting container: 95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb" id=0e5f6c44-5f43-45b9-9a94-686ccd3c42cd name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.808755933Z" level=info msg="Started container" PID=1777 containerID=95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper id=0e5f6c44-5f43-45b9-9a94-686ccd3c42cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=01622cfc58f4ad6c7d64603c95776a0ea487b7538438ae8b97b646d0950385db
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.897917003Z" level=info msg="Removing container: 3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658" id=d0fdc1f1-4b68-43b9-a506-f30ca31fcf03 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:14:28 embed-certs-273180 crio[565]: time="2025-11-09T14:14:28.909983893Z" level=info msg="Removed container 3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w/dashboard-metrics-scraper" id=d0fdc1f1-4b68-43b9-a506-f30ca31fcf03 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	95f7825f27c31       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   01622cfc58f4a       dashboard-metrics-scraper-6ffb444bf9-6xx4w   kubernetes-dashboard
	cf92131e5324b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   5ba40612b1e30       storage-provisioner                          kube-system
	bb7e516a22712       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   07d39eb3dcf4a       kubernetes-dashboard-855c9754f9-p4m9s        kubernetes-dashboard
	ee6c882af6aa0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   5a6fe4d6adb6f       busybox                                      default
	cd8e640f39808       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   1630defd31501       coredns-66bc5c9577-bbnm4                     kube-system
	961da20af9a59       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   2b09646eb7e38       kube-proxy-k6zsl                             kube-system
	595bbf89163b1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   bc4cb5d442d2b       kindnet-scgq8                                kube-system
	9e732a5e0ee32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   5ba40612b1e30       storage-provisioner                          kube-system
	4f9f38d1a0f6c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   e5179329456f0       etcd-embed-certs-273180                      kube-system
	97f5a7b8e8b2e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   77a75056db83e       kube-scheduler-embed-certs-273180            kube-system
	976a1e86747e5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   d00c4fe19d90c       kube-controller-manager-embed-certs-273180   kube-system
	9736e800f3ad2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   5c11f706d9dff       kube-apiserver-embed-certs-273180            kube-system
	
	
	==> coredns [cd8e640f3980854681179bacf773403036d5d364e134aba45034feb324e1a5c1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53852 - 33963 "HINFO IN 5870066111307080810.5615568829944748013. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092263267s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-273180
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-273180
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=embed-certs-273180
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_12_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:12:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-273180
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:14:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:14:37 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:14:37 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:14:37 +0000   Sun, 09 Nov 2025 14:12:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:14:37 +0000   Sun, 09 Nov 2025 14:13:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-273180
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                ca6fdff2-5006-4b63-a78c-0c296485de58
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-bbnm4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-273180                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-scgq8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-273180             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-273180    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-k6zsl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-273180             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6xx4w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p4m9s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-273180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-273180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-273180 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-273180 event: Registered Node embed-certs-273180 in Controller
	  Normal  NodeReady                97s                kubelet          Node embed-certs-273180 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node embed-certs-273180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node embed-certs-273180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node embed-certs-273180 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-273180 event: Registered Node embed-certs-273180 in Controller
	
	
	==> dmesg <==
	[  +0.090313] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	
	
	==> etcd [4f9f38d1a0f6c3b90459b53a1a0308e519ef2d1e4f12c24e072989aa297eab6c] <==
	{"level":"warn","ts":"2025-11-09T14:13:55.588269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.605843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.610693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.617093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.625041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:13:55.672835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33640","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:13:57.986781Z","caller":"traceutil/trace.go:172","msg":"trace[1454070792] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:502; }","duration":"124.563245ms","start":"2025-11-09T14:13:57.862190Z","end":"2025-11-09T14:13:57.986753Z","steps":["trace[1454070792] 'read index received'  (duration: 124.5548ms)","trace[1454070792] 'applied index is now lower than readState.Index'  (duration: 6.735µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:13:57.986923Z","caller":"traceutil/trace.go:172","msg":"trace[25412613] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"136.98974ms","start":"2025-11-09T14:13:57.849912Z","end":"2025-11-09T14:13:57.986902Z","steps":["trace[25412613] 'process raft request'  (duration: 136.878132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:57.986937Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.703647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T14:13:57.987092Z","caller":"traceutil/trace.go:172","msg":"trace[1786821572] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:473; }","duration":"124.899457ms","start":"2025-11-09T14:13:57.862183Z","end":"2025-11-09T14:13:57.987083Z","steps":["trace[1786821572] 'agreement among raft nodes before linearized reading'  (duration: 124.669295ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:58.166053Z","caller":"traceutil/trace.go:172","msg":"trace[90093492] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"161.161847ms","start":"2025-11-09T14:13:58.004869Z","end":"2025-11-09T14:13:58.166031Z","steps":["trace[90093492] 'process raft request'  (duration: 125.327356ms)","trace[90093492] 'compare'  (duration: 35.726729ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:13:58.421128Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.071537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller\" limit:1 ","response":"range_response_count:1 size:252"}
	{"level":"info","ts":"2025-11-09T14:13:58.421186Z","caller":"traceutil/trace.go:172","msg":"trace[2075475489] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller; range_end:; response_count:1; response_revision:477; }","duration":"148.162592ms","start":"2025-11-09T14:13:58.273014Z","end":"2025-11-09T14:13:58.421177Z","steps":["trace[2075475489] 'range keys from in-memory index tree'  (duration: 147.949471ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:59.720428Z","caller":"traceutil/trace.go:172","msg":"trace[1297080510] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:507; }","duration":"101.127303ms","start":"2025-11-09T14:13:59.619277Z","end":"2025-11-09T14:13:59.720404Z","steps":["trace[1297080510] 'read index received'  (duration: 101.119896ms)","trace[1297080510] 'applied index is now lower than readState.Index'  (duration: 5.751µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:13:59.724047Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.750858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-09T14:13:59.724104Z","caller":"traceutil/trace.go:172","msg":"trace[304818503] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:477; }","duration":"104.818926ms","start":"2025-11-09T14:13:59.619274Z","end":"2025-11-09T14:13:59.724093Z","steps":["trace[304818503] 'agreement among raft nodes before linearized reading'  (duration: 101.23885ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:59.724105Z","caller":"traceutil/trace.go:172","msg":"trace[1579320021] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"129.916096ms","start":"2025-11-09T14:13:59.594164Z","end":"2025-11-09T14:13:59.724080Z","steps":["trace[1579320021] 'process raft request'  (duration: 126.275023ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:59.724166Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.170509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:2895"}
	{"level":"warn","ts":"2025-11-09T14:13:59.724196Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.141984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4437"}
	{"level":"info","ts":"2025-11-09T14:13:59.724199Z","caller":"traceutil/trace.go:172","msg":"trace[1912907246] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:478; }","duration":"101.228446ms","start":"2025-11-09T14:13:59.622963Z","end":"2025-11-09T14:13:59.724191Z","steps":["trace[1912907246] 'agreement among raft nodes before linearized reading'  (duration: 101.108034ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:13:59.724231Z","caller":"traceutil/trace.go:172","msg":"trace[802543164] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:478; }","duration":"101.181849ms","start":"2025-11-09T14:13:59.623040Z","end":"2025-11-09T14:13:59.724222Z","steps":["trace[802543164] 'agreement among raft nodes before linearized reading'  (duration: 101.0593ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:59.724403Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.393947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/kube-dns\" limit:1 ","response":"range_response_count:1 size:1211"}
	{"level":"info","ts":"2025-11-09T14:13:59.724434Z","caller":"traceutil/trace.go:172","msg":"trace[1604086386] range","detail":"{range_begin:/registry/services/specs/kube-system/kube-dns; range_end:; response_count:1; response_revision:478; }","duration":"101.426154ms","start":"2025-11-09T14:13:59.622998Z","end":"2025-11-09T14:13:59.724425Z","steps":["trace[1604086386] 'agreement among raft nodes before linearized reading'  (duration: 101.356307ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:13:59.724436Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.434883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" limit:1 ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2025-11-09T14:13:59.724470Z","caller":"traceutil/trace.go:172","msg":"trace[1841103548] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-66bc5c9577; range_end:; response_count:1; response_revision:478; }","duration":"101.470212ms","start":"2025-11-09T14:13:59.622989Z","end":"2025-11-09T14:13:59.724459Z","steps":["trace[1841103548] 'agreement among raft nodes before linearized reading'  (duration: 101.371842ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:14:52 up 57 min,  0 user,  load average: 4.92, 3.66, 2.23
	Linux embed-certs-273180 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [595bbf89163b18285245a51e9a1992b62f9126f3823fbe528329a26e1311df9d] <==
	I1109 14:13:57.327920       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:13:57.328214       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1109 14:13:57.328419       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:13:57.328443       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:13:57.328465       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:13:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:13:57.587985       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:13:57.630234       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:13:57.630250       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:13:57.630389       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:13:57.830958       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:13:57.830986       1 metrics.go:72] Registering metrics
	I1109 14:13:57.831030       1 controller.go:711] "Syncing nftables rules"
	I1109 14:14:07.587703       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:07.587780       1 main.go:301] handling current node
	I1109 14:14:17.591206       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:17.591241       1 main.go:301] handling current node
	I1109 14:14:27.587520       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:27.587575       1 main.go:301] handling current node
	I1109 14:14:37.588726       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:37.588755       1 main.go:301] handling current node
	I1109 14:14:47.595717       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1109 14:14:47.595754       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9736e800f3ad26c7d4d7a6c889abcad2a30ef0f3907128567e28dbcdd9a9355e] <==
	I1109 14:13:56.287896       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:13:56.288319       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:13:56.288348       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:13:56.288373       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:13:56.288396       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:13:56.287635       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1109 14:13:56.287193       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:13:56.287224       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1109 14:13:56.292225       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:13:56.299042       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:13:56.319455       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:13:56.335112       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:13:56.352845       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:13:56.571607       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:13:56.601137       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:13:56.618100       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:13:56.624632       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:13:56.634884       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:13:56.667241       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.222.159"}
	I1109 14:13:56.684566       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.157.22"}
	I1109 14:13:57.186693       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:13:59.975579       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:14:00.176611       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:14:00.176611       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:14:00.224948       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [976a1e86747e59d5a7c8cdbc6eaef9d6d0fde3a08e20706cee6160921ddf6689] <==
	I1109 14:13:59.621725       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:13:59.621798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:13:59.621815       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:13:59.621822       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:13:59.621817       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:13:59.621831       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:13:59.624184       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1109 14:13:59.625360       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1109 14:13:59.625382       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:13:59.625407       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:13:59.625427       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:13:59.625447       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:13:59.625449       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:13:59.625490       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:13:59.625521       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:13:59.625527       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:13:59.625532       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:13:59.627831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:13:59.632091       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 14:13:59.633286       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:13:59.637541       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:13:59.640721       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:13:59.640744       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:13:59.642914       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:14:00.179330       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [961da20af9a59ad0af4140da7675b9435e7dcff1b6393a569154a81aa4bfb681] <==
	I1109 14:13:57.156260       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:13:57.233862       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:13:57.334046       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:13:57.334086       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1109 14:13:57.334202       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:13:57.356592       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:13:57.356655       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:13:57.363108       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:13:57.363633       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:13:57.363687       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:13:57.365835       1 config.go:200] "Starting service config controller"
	I1109 14:13:57.365851       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:13:57.365878       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:13:57.365884       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:13:57.365934       1 config.go:309] "Starting node config controller"
	I1109 14:13:57.365975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:13:57.366007       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:13:57.366369       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:13:57.366384       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:13:57.466548       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:13:57.466560       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:13:57.466586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [97f5a7b8e8b2ec193df908b13853b3f0d95619f6cc39fc3c693bf5f008f98071] <==
	I1109 14:13:54.662228       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:13:56.254096       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:13:56.254137       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:13:56.254149       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:13:56.254159       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:13:56.276801       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:13:56.276834       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:13:56.280063       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:13:56.280109       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:13:56.280856       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:13:56.281088       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:13:56.381359       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:14:00 embed-certs-273180 kubelet[718]: I1109 14:14:00.053042     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfds\" (UniqueName: \"kubernetes.io/projected/65d8f2ce-1bc0-4c22-8527-78d217576a5f-kube-api-access-hrfds\") pod \"dashboard-metrics-scraper-6ffb444bf9-6xx4w\" (UID: \"65d8f2ce-1bc0-4c22-8527-78d217576a5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w"
	Nov 09 14:14:00 embed-certs-273180 kubelet[718]: I1109 14:14:00.053076     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/65d8f2ce-1bc0-4c22-8527-78d217576a5f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-6xx4w\" (UID: \"65d8f2ce-1bc0-4c22-8527-78d217576a5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w"
	Nov 09 14:14:02 embed-certs-273180 kubelet[718]: I1109 14:14:02.010461     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 09 14:14:04 embed-certs-273180 kubelet[718]: I1109 14:14:04.844566     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p4m9s" podStartSLOduration=1.174323397 podStartE2EDuration="4.844532273s" podCreationTimestamp="2025-11-09 14:14:00 +0000 UTC" firstStartedPulling="2025-11-09 14:14:00.375175009 +0000 UTC m=+6.723276310" lastFinishedPulling="2025-11-09 14:14:04.045383894 +0000 UTC m=+10.393485186" observedRunningTime="2025-11-09 14:14:04.844035562 +0000 UTC m=+11.192136865" watchObservedRunningTime="2025-11-09 14:14:04.844532273 +0000 UTC m=+11.192633578"
	Nov 09 14:14:07 embed-certs-273180 kubelet[718]: I1109 14:14:07.839534     718 scope.go:117] "RemoveContainer" containerID="8c767768ee151ac9d5414412223ba650d3d3c39cda0dc2df586cc6548c581e16"
	Nov 09 14:14:08 embed-certs-273180 kubelet[718]: I1109 14:14:08.843865     718 scope.go:117] "RemoveContainer" containerID="8c767768ee151ac9d5414412223ba650d3d3c39cda0dc2df586cc6548c581e16"
	Nov 09 14:14:08 embed-certs-273180 kubelet[718]: I1109 14:14:08.844029     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:08 embed-certs-273180 kubelet[718]: E1109 14:14:08.844228     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:09 embed-certs-273180 kubelet[718]: I1109 14:14:09.849125     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:09 embed-certs-273180 kubelet[718]: E1109 14:14:09.849330     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:15 embed-certs-273180 kubelet[718]: I1109 14:14:15.524323     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:15 embed-certs-273180 kubelet[718]: E1109 14:14:15.524545     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:27 embed-certs-273180 kubelet[718]: I1109 14:14:27.892862     718 scope.go:117] "RemoveContainer" containerID="9e732a5e0ee324731917ec205a3d4fb92d4e86259ec8e5cb6ae0474e3dfee477"
	Nov 09 14:14:28 embed-certs-273180 kubelet[718]: I1109 14:14:28.763435     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:28 embed-certs-273180 kubelet[718]: I1109 14:14:28.896682     718 scope.go:117] "RemoveContainer" containerID="3819b0255c2b1f9a62c9ba302f4efaf512f149dfe6dcb5f6ec27375143ca9658"
	Nov 09 14:14:28 embed-certs-273180 kubelet[718]: I1109 14:14:28.896888     718 scope.go:117] "RemoveContainer" containerID="95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	Nov 09 14:14:28 embed-certs-273180 kubelet[718]: E1109 14:14:28.897092     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:35 embed-certs-273180 kubelet[718]: I1109 14:14:35.524077     718 scope.go:117] "RemoveContainer" containerID="95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	Nov 09 14:14:35 embed-certs-273180 kubelet[718]: E1109 14:14:35.524286     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:45 embed-certs-273180 kubelet[718]: I1109 14:14:45.763293     718 scope.go:117] "RemoveContainer" containerID="95f7825f27c31fe22ae58953c8d5a63a0f9a9fead69df32eceb1a491a3f162eb"
	Nov 09 14:14:45 embed-certs-273180 kubelet[718]: E1109 14:14:45.763571     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6xx4w_kubernetes-dashboard(65d8f2ce-1bc0-4c22-8527-78d217576a5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6xx4w" podUID="65d8f2ce-1bc0-4c22-8527-78d217576a5f"
	Nov 09 14:14:47 embed-certs-273180 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:14:47 embed-certs-273180 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:14:47 embed-certs-273180 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:14:47 embed-certs-273180 systemd[1]: kubelet.service: Consumed 1.583s CPU time.
	
	
	==> kubernetes-dashboard [bb7e516a22712b6d157a69c73f6034857933d87290f12deb79ba4439103faeda] <==
	2025/11/09 14:14:04 Starting overwatch
	2025/11/09 14:14:04 Using namespace: kubernetes-dashboard
	2025/11/09 14:14:04 Using in-cluster config to connect to apiserver
	2025/11/09 14:14:04 Using secret token for csrf signing
	2025/11/09 14:14:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:14:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:14:04 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:14:04 Generating JWE encryption key
	2025/11/09 14:14:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:14:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:14:04 Initializing JWE encryption key from synchronized object
	2025/11/09 14:14:04 Creating in-cluster Sidecar client
	2025/11/09 14:14:04 Serving insecurely on HTTP port: 9090
	2025/11/09 14:14:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:14:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9e732a5e0ee324731917ec205a3d4fb92d4e86259ec8e5cb6ae0474e3dfee477] <==
	I1109 14:13:57.117356       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:14:27.120910       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cf92131e5324bc40f78a4c68dc30e47e8eda7809c531a9acd133ee856f3dc7ba] <==
	I1109 14:14:27.953289       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:14:27.961894       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:14:27.961944       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:14:27.964064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:31.419392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:35.679863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:39.278262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:42.332005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:45.354103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:45.359274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:14:45.359413       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:14:45.359491       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee6b9ad1-7e0f-4b6d-8696-e4410f1b9328", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-273180_7abb159b-6632-488b-91a1-fd07d842913d became leader
	I1109 14:14:45.359530       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-273180_7abb159b-6632-488b-91a1-fd07d842913d!
	W1109 14:14:45.361356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:45.364285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:14:45.459747       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-273180_7abb159b-6632-488b-91a1-fd07d842913d!
	W1109 14:14:47.368373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:47.374156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:49.378518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:49.382748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:51.385674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:14:51.430052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
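The storage-provisioner log above repeatedly warns that "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" because its leader-election lease still goes through the legacy Endpoints object. As a minimal, hedged illustration of the replacement API the warning points at (not part of the test harness, and assuming an in-cluster kubeconfig), listing EndpointSlices with client-go looks roughly like this:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config, as the storage-provisioner itself would use.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// discovery.k8s.io/v1 EndpointSlices instead of the deprecated v1 Endpoints.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name)
		}
	}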
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-273180 -n embed-certs-273180
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-273180 -n embed-certs-273180: exit status 2 (369.661982ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-273180 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.98s)
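The post-mortem step just above runs `minikube status --format={{.APIServer}}`, gets exit status 2 while still printing "Running", and the harness deliberately treats that as "may be ok". A small Go sketch of that pattern (a reconstruction for illustration, not the helpers_test implementation; the binary path and profile name are placeholders) is:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// checkAPIServer runs the same status query as the post-mortem above and
	// tolerates exit status 2, which minikube returns when some component is
	// not in the expected state even though the queried field still prints.
	func checkAPIServer(minikube, profile string) (string, error) {
		out, err := exec.Command(minikube, "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
		status := strings.TrimSpace(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 2 {
			// "status error: exit status 2 (may be ok)" — record it, don't fail.
			return status, nil
		}
		return status, err
	}

	func main() {
		s, err := checkAPIServer("out/minikube-linux-amd64", "embed-certs-273180")
		fmt.Println(s, err)
	}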

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-326524 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-326524 --alsologtostderr -v=1: exit status 80 (2.188146372s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-326524 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:15:57.249455  298824 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:15:57.249732  298824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:15:57.249742  298824 out.go:374] Setting ErrFile to fd 2...
	I1109 14:15:57.249746  298824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:15:57.249918  298824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:15:57.250153  298824 out.go:368] Setting JSON to false
	I1109 14:15:57.250188  298824 mustload.go:66] Loading cluster: default-k8s-diff-port-326524
	I1109 14:15:57.250464  298824 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:57.250855  298824 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:57.274971  298824 host.go:66] Checking if "default-k8s-diff-port-326524" exists ...
	I1109 14:15:57.275502  298824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:15:57.360340  298824 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-09 14:15:57.344064044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:15:57.361429  298824 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-326524 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:15:57.363095  298824 out.go:179] * Pausing node default-k8s-diff-port-326524 ... 
	I1109 14:15:57.365510  298824 host.go:66] Checking if "default-k8s-diff-port-326524" exists ...
	I1109 14:15:57.365858  298824 ssh_runner.go:195] Run: systemctl --version
	I1109 14:15:57.365919  298824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:15:57.395466  298824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:15:57.507167  298824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:57.524929  298824 pause.go:52] kubelet running: true
	I1109 14:15:57.525018  298824 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:15:57.771837  298824 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:15:57.772085  298824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:15:57.851211  298824 cri.go:89] found id: "78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6"
	I1109 14:15:57.851235  298824 cri.go:89] found id: "db196ab0b527eabaa5ca6448d00c0929a6ddeb5c052739081cb73ceb539b821d"
	I1109 14:15:57.851246  298824 cri.go:89] found id: "fc06f175e4a8df21959410c9b874ceb5942160e55f3c77acdd8326cb0be2a478"
	I1109 14:15:57.851250  298824 cri.go:89] found id: "ebf68a39b2ef31de8b38938ff0fda338ca0858e9fd7cc54035465ac606412dc9"
	I1109 14:15:57.851253  298824 cri.go:89] found id: "4b5a253a8c0774e804e83d03fcc6cdd58c8b3baf291ecfb31bc3eb73c12fa77d"
	I1109 14:15:57.851256  298824 cri.go:89] found id: "7ab8f2cac821afdbf2b15cac151b0b2e02e8e8d57071e97a867dc8ec28d4c7f2"
	I1109 14:15:57.851259  298824 cri.go:89] found id: "fbe03639cf3cf12216d6d619a7ac216d6482e7d7722f4c3ff0c7021ea98e7f30"
	I1109 14:15:57.851261  298824 cri.go:89] found id: "5c183e798015e8d24e58b5d9a3615af03e163ce2046d50f89cd8270f5eed3b9f"
	I1109 14:15:57.851264  298824 cri.go:89] found id: "837343655ca08ffad86a95fba8e051cfacdce4058b4c6aebfef423a9a95ad170"
	I1109 14:15:57.851270  298824 cri.go:89] found id: "f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	I1109 14:15:57.851272  298824 cri.go:89] found id: "86d787fc7b9fc4076e72a30dca4ee7586b81d535a1d2635a796c6746370cdcd2"
	I1109 14:15:57.851275  298824 cri.go:89] found id: ""
	I1109 14:15:57.851312  298824 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:15:57.865156  298824 retry.go:31] will retry after 155.021646ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:15:57Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:15:58.020736  298824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:58.039256  298824 pause.go:52] kubelet running: false
	I1109 14:15:58.039313  298824 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:15:58.233528  298824 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:15:58.233622  298824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:15:58.305325  298824 cri.go:89] found id: "78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6"
	I1109 14:15:58.305362  298824 cri.go:89] found id: "db196ab0b527eabaa5ca6448d00c0929a6ddeb5c052739081cb73ceb539b821d"
	I1109 14:15:58.305368  298824 cri.go:89] found id: "fc06f175e4a8df21959410c9b874ceb5942160e55f3c77acdd8326cb0be2a478"
	I1109 14:15:58.305373  298824 cri.go:89] found id: "ebf68a39b2ef31de8b38938ff0fda338ca0858e9fd7cc54035465ac606412dc9"
	I1109 14:15:58.305378  298824 cri.go:89] found id: "4b5a253a8c0774e804e83d03fcc6cdd58c8b3baf291ecfb31bc3eb73c12fa77d"
	I1109 14:15:58.305383  298824 cri.go:89] found id: "7ab8f2cac821afdbf2b15cac151b0b2e02e8e8d57071e97a867dc8ec28d4c7f2"
	I1109 14:15:58.305388  298824 cri.go:89] found id: "fbe03639cf3cf12216d6d619a7ac216d6482e7d7722f4c3ff0c7021ea98e7f30"
	I1109 14:15:58.305392  298824 cri.go:89] found id: "5c183e798015e8d24e58b5d9a3615af03e163ce2046d50f89cd8270f5eed3b9f"
	I1109 14:15:58.305397  298824 cri.go:89] found id: "837343655ca08ffad86a95fba8e051cfacdce4058b4c6aebfef423a9a95ad170"
	I1109 14:15:58.305413  298824 cri.go:89] found id: "f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	I1109 14:15:58.305419  298824 cri.go:89] found id: "86d787fc7b9fc4076e72a30dca4ee7586b81d535a1d2635a796c6746370cdcd2"
	I1109 14:15:58.305423  298824 cri.go:89] found id: ""
	I1109 14:15:58.305466  298824 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:15:58.319760  298824 retry.go:31] will retry after 191.488858ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:15:58Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:15:58.512183  298824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:58.525594  298824 pause.go:52] kubelet running: false
	I1109 14:15:58.525666  298824 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:15:58.672684  298824 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:15:58.672772  298824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:15:58.737343  298824 cri.go:89] found id: "78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6"
	I1109 14:15:58.737368  298824 cri.go:89] found id: "db196ab0b527eabaa5ca6448d00c0929a6ddeb5c052739081cb73ceb539b821d"
	I1109 14:15:58.737373  298824 cri.go:89] found id: "fc06f175e4a8df21959410c9b874ceb5942160e55f3c77acdd8326cb0be2a478"
	I1109 14:15:58.737378  298824 cri.go:89] found id: "ebf68a39b2ef31de8b38938ff0fda338ca0858e9fd7cc54035465ac606412dc9"
	I1109 14:15:58.737382  298824 cri.go:89] found id: "4b5a253a8c0774e804e83d03fcc6cdd58c8b3baf291ecfb31bc3eb73c12fa77d"
	I1109 14:15:58.737386  298824 cri.go:89] found id: "7ab8f2cac821afdbf2b15cac151b0b2e02e8e8d57071e97a867dc8ec28d4c7f2"
	I1109 14:15:58.737390  298824 cri.go:89] found id: "fbe03639cf3cf12216d6d619a7ac216d6482e7d7722f4c3ff0c7021ea98e7f30"
	I1109 14:15:58.737394  298824 cri.go:89] found id: "5c183e798015e8d24e58b5d9a3615af03e163ce2046d50f89cd8270f5eed3b9f"
	I1109 14:15:58.737398  298824 cri.go:89] found id: "837343655ca08ffad86a95fba8e051cfacdce4058b4c6aebfef423a9a95ad170"
	I1109 14:15:58.737406  298824 cri.go:89] found id: "f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	I1109 14:15:58.737413  298824 cri.go:89] found id: "86d787fc7b9fc4076e72a30dca4ee7586b81d535a1d2635a796c6746370cdcd2"
	I1109 14:15:58.737417  298824 cri.go:89] found id: ""
	I1109 14:15:58.737462  298824 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:15:58.749533  298824 retry.go:31] will retry after 396.588081ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:15:58Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:15:59.147163  298824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:59.160109  298824 pause.go:52] kubelet running: false
	I1109 14:15:59.160191  298824 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:15:59.296505  298824 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:15:59.296594  298824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:15:59.360238  298824 cri.go:89] found id: "78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6"
	I1109 14:15:59.360262  298824 cri.go:89] found id: "db196ab0b527eabaa5ca6448d00c0929a6ddeb5c052739081cb73ceb539b821d"
	I1109 14:15:59.360266  298824 cri.go:89] found id: "fc06f175e4a8df21959410c9b874ceb5942160e55f3c77acdd8326cb0be2a478"
	I1109 14:15:59.360269  298824 cri.go:89] found id: "ebf68a39b2ef31de8b38938ff0fda338ca0858e9fd7cc54035465ac606412dc9"
	I1109 14:15:59.360271  298824 cri.go:89] found id: "4b5a253a8c0774e804e83d03fcc6cdd58c8b3baf291ecfb31bc3eb73c12fa77d"
	I1109 14:15:59.360274  298824 cri.go:89] found id: "7ab8f2cac821afdbf2b15cac151b0b2e02e8e8d57071e97a867dc8ec28d4c7f2"
	I1109 14:15:59.360277  298824 cri.go:89] found id: "fbe03639cf3cf12216d6d619a7ac216d6482e7d7722f4c3ff0c7021ea98e7f30"
	I1109 14:15:59.360279  298824 cri.go:89] found id: "5c183e798015e8d24e58b5d9a3615af03e163ce2046d50f89cd8270f5eed3b9f"
	I1109 14:15:59.360281  298824 cri.go:89] found id: "837343655ca08ffad86a95fba8e051cfacdce4058b4c6aebfef423a9a95ad170"
	I1109 14:15:59.360294  298824 cri.go:89] found id: "f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	I1109 14:15:59.360297  298824 cri.go:89] found id: "86d787fc7b9fc4076e72a30dca4ee7586b81d535a1d2635a796c6746370cdcd2"
	I1109 14:15:59.360299  298824 cri.go:89] found id: ""
	I1109 14:15:59.360340  298824 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:15:59.373289  298824 out.go:203] 
	W1109 14:15:59.374401  298824 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:15:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:15:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:15:59.374416  298824 out.go:285] * 
	* 
	W1109 14:15:59.378464  298824 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:15:59.379528  298824 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-326524 --alsologtostderr -v=1 failed: exit status 80
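The stderr above shows why the pause exits with GUEST_PAUSE: `crictl ps` keeps finding the running containers, but every retry of `sudo runc list -f json` fails with "open /run/runc: no such file or directory" on this CRI-O node, so the retry loop gives up after ~2s. The following Go sketch only mirrors that shape for illustration; it is not minikube's pause code, and the fallback to `crictl ps` when the runc state directory is absent is an assumption, not the project's fix:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// listRunning retries the step that fails in the log above. `runc list`
	// reads container state from /run/runc; when that directory is missing
	// (as on this CRI-O node), retrying produces the same error every time,
	// so the sketch asks the CRI via crictl instead of looping.
	func listRunning() ([]byte, error) {
		var lastErr error
		for attempt, delay := 0, 150*time.Millisecond; attempt < 3; attempt, delay = attempt+1, delay*2 {
			if _, err := os.Stat("/run/runc"); err != nil {
				// Runtime state dir is missing; query the CRI rather than runc.
				return exec.Command("sudo", "crictl", "ps", "--quiet").CombinedOutput()
			}
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return out, nil
			}
			lastErr = fmt.Errorf("runc list: %w: %s", err, out)
			time.Sleep(delay)
		}
		return nil, lastErr
	}

	func main() {
		out, err := listRunning()
		fmt.Println(string(out), err)
	}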
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-326524
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-326524:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9",
	        "Created": "2025-11-09T14:13:22.347253658Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287963,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:15:02.008420696Z",
	            "FinishedAt": "2025-11-09T14:14:58.821452172Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/hosts",
	        "LogPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9-json.log",
	        "Name": "/default-k8s-diff-port-326524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-326524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-326524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9",
	                "LowerDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-326524",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-326524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-326524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-326524",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-326524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "628f5f34a67d2308d8573aea56f7b31953d9374a115a545d55b4d3066ed1f45d",
	            "SandboxKey": "/var/run/docker/netns/628f5f34a67d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-326524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:6f:0c:a7:3c:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1418d8b0aecfeebbb964747ce9f2239c14745f39f121eb76b984b7589e5562c5",
	                    "EndpointID": "077158bde996f64749ced02646b419379755061a9a152ab723aa6cc72d97cf06",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-326524",
	                        "4d5e864b1f2e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
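
The rest of the post-mortem reaches the node through the host port mappings listed under NetworkSettings.Ports above (the harness itself resolves them with a docker container inspect -f template, as visible later in these logs). A minimal sketch of reading the same mapping for the apiserver port 8444/tcp, using the container name from the dump; for this run it should print 127.0.0.1:33118:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields needed for the port lookup; names follow the inspect JSON above.
type portBinding struct {
	HostIp   string
	HostPort string
}

type containerInspect struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-326524").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	// docker inspect prints a JSON array, one element per container.
	var containers []containerInspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatalf("decode inspect output: %v", err)
	}
	for _, c := range containers {
		for _, b := range c.NetworkSettings.Ports["8444/tcp"] {
			// Expected for the dump above: 127.0.0.1:33118
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}
}
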
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524: exit status 2 (309.580434ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
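
The helper only reads the {{.Host}} field, so stdout reports Running even though the status command itself exits with code 2, which the harness tolerates ("may be ok"). A sketch that pulls the full component breakdown instead, assuming the same binary, a single-node profile, and minikube's JSON status output (the -o json flag and the Kubelet/APIServer/Kubeconfig field names are assumptions not taken from this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of `minikube status -o json` for a single-node cluster;
// only the Host field is confirmed by the {{.Host}} template used above.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "default-k8s-diff-port-326524", "-o", "json").Output()
	if err != nil {
		// status exits non-zero when a component is degraded; stdout still carries the payload.
		fmt.Printf("status exit: %v\n", err)
	}
	var st clusterStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		fmt.Printf("could not decode %q: %v\n", out, jsonErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}
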
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-326524 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-326524 logs -n 25: (1.065769928s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-593530 sudo systemctl status docker --all --full --no-pager                                                                                                      │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo systemctl cat docker --no-pager                                                                                                                      │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cat /etc/docker/daemon.json                                                                                                                          │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo docker system info                                                                                                                                   │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-326524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo systemctl status cri-docker --all --full --no-pager                                                                                                  │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo systemctl cat cri-docker --no-pager                                                                                                                  │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                             │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                       │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cri-dockerd --version                                                                                                                                │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo systemctl status containerd --all --full --no-pager                                                                                                  │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo systemctl cat containerd --no-pager                                                                                                                  │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cat /lib/systemd/system/containerd.service                                                                                                           │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cat /etc/containerd/config.toml                                                                                                                      │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo containerd config dump                                                                                                                               │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo systemctl status crio --all --full --no-pager                                                                                                        │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo systemctl cat crio --no-pager                                                                                                                        │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                              │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo crio config                                                                                                                                          │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ delete  │ -p auto-593530                                                                                                                                                           │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ start   │ -p custom-flannel-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio       │ custom-flannel-593530        │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p calico-593530 pgrep -a kubelet                                                                                                                                        │ calico-593530                │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ image   │ default-k8s-diff-port-326524 image list --format=json                                                                                                                    │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ pause   │ -p default-k8s-diff-port-326524 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:15:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:15:09.248163  292305 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:15:09.248321  292305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:15:09.248332  292305 out.go:374] Setting ErrFile to fd 2...
	I1109 14:15:09.248338  292305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:15:09.248568  292305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:15:09.249093  292305 out.go:368] Setting JSON to false
	I1109 14:15:09.250527  292305 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3459,"bootTime":1762694250,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:15:09.250615  292305 start.go:143] virtualization: kvm guest
	I1109 14:15:09.252574  292305 out.go:179] * [custom-flannel-593530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:15:09.254010  292305 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:15:09.254018  292305 notify.go:221] Checking for updates...
	I1109 14:15:09.256418  292305 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:15:09.258018  292305 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:09.259118  292305 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:15:09.260250  292305 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:15:09.261303  292305 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:15:07.998325  287405 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:15:08.018527  287405 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:15:08.023949  287405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:15:08.037050  287405 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:15:08.037213  287405 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:15:08.037278  287405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:15:08.076018  287405 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:15:08.076038  287405 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:15:08.076087  287405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:15:08.108788  287405 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:15:08.108812  287405 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:15:08.108821  287405 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:15:08.108942  287405 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-326524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:15:08.109019  287405 ssh_runner.go:195] Run: crio config
	I1109 14:15:08.178530  287405 cni.go:84] Creating CNI manager for ""
	I1109 14:15:08.178555  287405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:15:08.178572  287405 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:15:08.178597  287405 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-326524 NodeName:default-k8s-diff-port-326524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:15:08.178780  287405 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-326524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:15:08.178859  287405 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:15:08.188730  287405 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:15:08.188785  287405 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:15:08.196529  287405 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:15:08.212596  287405 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:15:08.228685  287405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1109 14:15:08.244850  287405 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:15:08.249630  287405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:15:08.262257  287405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:08.355912  287405 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:08.379875  287405 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524 for IP: 192.168.85.2
	I1109 14:15:08.379900  287405 certs.go:195] generating shared ca certs ...
	I1109 14:15:08.379921  287405 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:08.380082  287405 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:15:08.380135  287405 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:15:08.380146  287405 certs.go:257] generating profile certs ...
	I1109 14:15:08.380246  287405 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.key
	I1109 14:15:08.380319  287405 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782
	I1109 14:15:08.380365  287405 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key
	I1109 14:15:08.380496  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:15:08.380534  287405 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:15:08.380548  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:15:08.380579  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:15:08.380615  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:15:08.380663  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:15:08.380718  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:15:08.381502  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:15:08.402440  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:15:08.436548  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:15:08.463069  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:15:08.501671  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:15:08.519946  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:15:08.537881  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:15:08.553757  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:15:08.570148  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:15:08.588077  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:15:08.606687  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:15:08.625632  287405 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:15:08.640330  287405 ssh_runner.go:195] Run: openssl version
	I1109 14:15:08.652070  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:15:08.664725  287405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:15:08.669445  287405 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:15:08.669499  287405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:15:08.719098  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:15:08.727517  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:15:08.736745  287405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:15:08.740387  287405 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:15:08.740441  287405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:15:08.780445  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:15:08.788558  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:15:08.797526  287405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:08.801330  287405 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:08.801399  287405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:08.847249  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:15:08.856216  287405 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:15:08.860420  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:15:08.924429  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:15:08.984219  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:15:09.062290  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:15:09.125578  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:15:09.186578  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:15:09.249713  287405 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:15:09.249823  287405 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:15:09.249875  287405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:15:09.291281  287405 cri.go:89] found id: "7ab8f2cac821afdbf2b15cac151b0b2e02e8e8d57071e97a867dc8ec28d4c7f2"
	I1109 14:15:09.291304  287405 cri.go:89] found id: "fbe03639cf3cf12216d6d619a7ac216d6482e7d7722f4c3ff0c7021ea98e7f30"
	I1109 14:15:09.291311  287405 cri.go:89] found id: "5c183e798015e8d24e58b5d9a3615af03e163ce2046d50f89cd8270f5eed3b9f"
	I1109 14:15:09.291322  287405 cri.go:89] found id: "837343655ca08ffad86a95fba8e051cfacdce4058b4c6aebfef423a9a95ad170"
	I1109 14:15:09.291327  287405 cri.go:89] found id: ""
	I1109 14:15:09.291369  287405 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:15:09.306193  287405 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:15:09Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:15:09.306276  287405 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:15:09.316622  287405 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:15:09.316755  287405 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:15:09.316805  287405 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:15:09.326991  287405 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:15:09.327457  287405 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-326524" does not appear in /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:09.327561  287405 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-5854/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-326524" cluster setting kubeconfig missing "default-k8s-diff-port-326524" context setting]
	I1109 14:15:09.328023  287405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:09.331761  287405 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:15:09.342472  287405 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1109 14:15:09.342495  287405 kubeadm.go:602] duration metric: took 25.723126ms to restartPrimaryControlPlane
	I1109 14:15:09.342505  287405 kubeadm.go:403] duration metric: took 92.801476ms to StartCluster
	I1109 14:15:09.342521  287405 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:09.342570  287405 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:09.343328  287405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:09.343916  287405 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:09.344014  287405 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:09.344157  287405 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:15:09.344254  287405 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-326524"
	I1109 14:15:09.344274  287405 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-326524"
	W1109 14:15:09.344282  287405 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:15:09.344307  287405 host.go:66] Checking if "default-k8s-diff-port-326524" exists ...
	I1109 14:15:09.344566  287405 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-326524"
	I1109 14:15:09.344603  287405 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-326524"
	I1109 14:15:09.344705  287405 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-326524"
	I1109 14:15:09.344849  287405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:09.345147  287405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:09.344727  287405 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-326524"
	W1109 14:15:09.345753  287405 addons.go:248] addon dashboard should already be in state true
	I1109 14:15:09.345784  287405 out.go:179] * Verifying Kubernetes components...
	I1109 14:15:09.345789  287405 host.go:66] Checking if "default-k8s-diff-port-326524" exists ...
	I1109 14:15:09.346259  287405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:09.347004  287405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:09.379753  287405 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-326524"
	W1109 14:15:09.379777  287405 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:15:09.379804  287405 host.go:66] Checking if "default-k8s-diff-port-326524" exists ...
	I1109 14:15:09.380240  287405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:09.383272  287405 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:15:09.384712  287405 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:15:09.385962  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:15:09.385982  287405 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:15:09.386037  287405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:15:09.387682  287405 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:15:09.263052  292305 config.go:182] Loaded profile config "calico-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:09.263200  292305 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:09.263315  292305 config.go:182] Loaded profile config "kindnet-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:09.263425  292305 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:15:09.297076  292305 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:15:09.297210  292305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:15:09.414574  292305 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-09 14:15:09.383822674 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:15:09.415141  292305 docker.go:319] overlay module found
	I1109 14:15:09.419152  292305 out.go:179] * Using the docker driver based on user configuration
	I1109 14:15:09.421492  292305 start.go:309] selected driver: docker
	I1109 14:15:09.421505  292305 start.go:930] validating driver "docker" against <nil>
	I1109 14:15:09.421519  292305 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:15:09.422328  292305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:15:09.527729  292305 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-09 14:15:09.513356284 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:15:09.527946  292305 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:15:09.528222  292305 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:15:09.529728  292305 out.go:179] * Using Docker driver with root privileges
	I1109 14:15:09.530718  292305 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1109 14:15:09.530764  292305 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1109 14:15:09.530851  292305 start.go:353] cluster config:
	{Name:custom-flannel-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:15:09.532128  292305 out.go:179] * Starting "custom-flannel-593530" primary control-plane node in "custom-flannel-593530" cluster
	I1109 14:15:09.533083  292305 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:15:09.534234  292305 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:15:09.535313  292305 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:15:09.535341  292305 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:15:09.535349  292305 cache.go:65] Caching tarball of preloaded images
	I1109 14:15:09.535436  292305 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:15:09.535366  292305 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:15:09.535449  292305 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:15:09.535556  292305 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/config.json ...
	I1109 14:15:09.535588  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/config.json: {Name:mkbad36af8dabb255f57147eb5cb60362f4e098d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:09.562797  292305 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:15:09.562819  292305 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:15:09.562837  292305 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:15:09.562867  292305 start.go:360] acquireMachinesLock for custom-flannel-593530: {Name:mk5f212c6ccd0d4ce7db5d28c9e6cf64be85fa38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:15:09.562976  292305 start.go:364] duration metric: took 91.057µs to acquireMachinesLock for "custom-flannel-593530"
	I1109 14:15:09.563001  292305 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:09.563084  292305 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:15:09.613006  280419 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:15:09.613082  280419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:15:09.613213  280419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:15:09.613292  280419 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:15:09.613344  280419 kubeadm.go:319] OS: Linux
	I1109 14:15:09.613411  280419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:15:09.613482  280419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:15:09.613554  280419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:15:09.613624  280419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:15:09.614617  280419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:15:09.614745  280419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:15:09.614819  280419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:15:09.614904  280419 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:15:09.615012  280419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:15:09.615151  280419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:15:09.615271  280419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:15:09.615367  280419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:15:09.617112  280419 out.go:252]   - Generating certificates and keys ...
	I1109 14:15:09.617211  280419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:15:09.617322  280419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:15:09.617413  280419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:15:09.617488  280419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:15:09.617572  280419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:15:09.617661  280419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:15:09.617733  280419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:15:09.617875  280419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-593530 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:15:09.617947  280419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:15:09.618094  280419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-593530 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:15:09.618177  280419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:15:09.618254  280419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:15:09.618315  280419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:15:09.618400  280419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:15:09.618460  280419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:15:09.618531  280419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:15:09.618598  280419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:15:09.618706  280419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:15:09.618780  280419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:15:09.618875  280419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:15:09.618955  280419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:15:09.620172  280419 out.go:252]   - Booting up control plane ...
	I1109 14:15:09.620277  280419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:15:09.620376  280419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:15:09.620455  280419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:15:09.620586  280419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:15:09.620726  280419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:15:09.620858  280419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:15:09.620963  280419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:15:09.621009  280419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:15:09.621179  280419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:15:09.621302  280419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:15:09.621370  280419 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003113792s
	I1109 14:15:09.621480  280419 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:15:09.621577  280419 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1109 14:15:09.621736  280419 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:15:09.621836  280419 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:15:09.621931  280419 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.067158098s
	I1109 14:15:09.622019  280419 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.468471515s
	I1109 14:15:09.622101  280419 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001957862s
	I1109 14:15:09.622234  280419 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:15:09.622380  280419 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:15:09.622449  280419 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:15:09.622795  280419 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-593530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:15:09.622946  280419 kubeadm.go:319] [bootstrap-token] Using token: dy2agk.tr0oebul2kwwo3mm
	I1109 14:15:09.626667  280419 out.go:252]   - Configuring RBAC rules ...
	I1109 14:15:09.626905  280419 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:15:09.627128  280419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:15:09.627309  280419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:15:09.627470  280419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:15:09.627620  280419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:15:09.627745  280419 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:15:09.627889  280419 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:15:09.627945  280419 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:15:09.628008  280419 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:15:09.628014  280419 kubeadm.go:319] 
	I1109 14:15:09.628090  280419 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:15:09.628095  280419 kubeadm.go:319] 
	I1109 14:15:09.628190  280419 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:15:09.628196  280419 kubeadm.go:319] 
	I1109 14:15:09.628226  280419 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:15:09.628297  280419 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:15:09.628361  280419 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:15:09.628366  280419 kubeadm.go:319] 
	I1109 14:15:09.628432  280419 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:15:09.628438  280419 kubeadm.go:319] 
	I1109 14:15:09.628494  280419 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:15:09.628499  280419 kubeadm.go:319] 
	I1109 14:15:09.628563  280419 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:15:09.628688  280419 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:15:09.628768  280419 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:15:09.628774  280419 kubeadm.go:319] 
	I1109 14:15:09.628876  280419 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:15:09.628974  280419 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:15:09.628980  280419 kubeadm.go:319] 
	I1109 14:15:09.629088  280419 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dy2agk.tr0oebul2kwwo3mm \
	I1109 14:15:09.629212  280419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:15:09.629238  280419 kubeadm.go:319] 	--control-plane 
	I1109 14:15:09.629244  280419 kubeadm.go:319] 
	I1109 14:15:09.629356  280419 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:15:09.629361  280419 kubeadm.go:319] 
	I1109 14:15:09.629467  280419 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dy2agk.tr0oebul2kwwo3mm \
	I1109 14:15:09.629610  280419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
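
Note on the join commands above: the --discovery-token-ca-cert-hash value is not derived from the bootstrap token; it is the hex-encoded SHA-256 of the cluster CA certificate's Subject Public Key Info, which is why it is the same for both join commands. A minimal Go sketch of that derivation, assuming the CA file sits in the certificateDir the log reports (/var/lib/minikube/certs):

```go
// Sketch only: derive the kubeadm discovery-token-ca-cert-hash from a CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path inferred from the "[certs] Using certificateDir" line above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SHA-256 of the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```
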
	I1109 14:15:09.629623  280419 cni.go:84] Creating CNI manager for "kindnet"
	I1109 14:15:09.631799  280419 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:15:06.718913  285057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.crt ...
	I1109 14:15:06.718949  285057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.crt: {Name:mk8405f7379ebaff761135288cc47f11de920497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:06.719144  285057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.key ...
	I1109 14:15:06.719172  285057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.key: {Name:mk54c17d3f53ae6d6e4d043d25e865cefdb18ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:06.719450  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:15:06.719505  285057 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:15:06.719521  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:15:06.719552  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:15:06.719654  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:15:06.719701  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:15:06.719756  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:15:06.720506  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:15:06.747314  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:15:06.772596  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:15:06.794513  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:15:06.814162  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 14:15:06.834065  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:15:06.851457  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:15:06.869828  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:15:06.889071  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:15:06.910425  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:15:06.930131  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:15:06.948590  285057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:15:06.960731  285057 ssh_runner.go:195] Run: openssl version
	I1109 14:15:06.966872  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:15:06.974823  285057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:15:06.978195  285057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:15:06.978237  285057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:15:07.020774  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:15:07.037155  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:15:07.048994  285057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:15:07.053326  285057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:15:07.053381  285057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:15:07.092088  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:15:07.100971  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:15:07.109171  285057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:07.113081  285057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:07.113125  285057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:07.148602  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
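
The three test/ln pairs above are how each CA ends up in the node's system trust store: the file is hashed with `openssl x509 -hash` and symlinked as /etc/ssl/certs/<hash>.0 (here minikubeCA.pem → b5213941.0, 9365.pem → 51391683.0, 93652.pem → 3ec20f2e.0). A rough Go sketch of that flow, not minikube's actual code:

```go
// Sketch: install a CA into /etc/ssl/certs by its OpenSSL subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		// Mirror of the "test -L ... || ln -fs ..." shell the log runs.
		if err := os.Symlink(certPath, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted via", link)
}
```
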
	I1109 14:15:07.156615  285057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:15:07.160159  285057 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:15:07.160212  285057 kubeadm.go:401] StartCluster: {Name:calico-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:15:07.160295  285057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:15:07.160337  285057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:15:07.186892  285057 cri.go:89] found id: ""
	I1109 14:15:07.186946  285057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:15:07.194455  285057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:15:07.202432  285057 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:15:07.202479  285057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:15:07.213867  285057 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:15:07.213887  285057 kubeadm.go:158] found existing configuration files:
	
	I1109 14:15:07.213940  285057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:15:07.222241  285057 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:15:07.222288  285057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:15:07.229031  285057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:15:07.236011  285057 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:15:07.236050  285057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:15:07.242534  285057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:15:07.249612  285057 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:15:07.249674  285057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:15:07.256235  285057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:15:07.263172  285057 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:15:07.263206  285057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:15:07.270159  285057 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:15:07.331249  285057 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:15:07.413688  285057 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:15:09.388811  287405 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:09.388836  287405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:15:09.388883  287405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:15:09.418557  287405 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:09.418582  287405 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:15:09.418634  287405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:15:09.431172  287405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:15:09.435076  287405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:15:09.457734  287405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:15:09.537561  287405 ssh_runner.go:195] Run: sudo systemctl start kubelet
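
Every ssh_runner/sshutil entry in this block runs over an SSH session to the node container's forwarded 22/tcp port (33115) as user docker, authenticated with the machine's id_rsa. A stripped-down sketch of that pattern using golang.org/x/crypto/ssh; this is illustrative, not minikube's sshutil, and the key path is a placeholder:

```go
// Sketch: run a command on the node container the way the ssh_runner lines do.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // placeholder; the log uses the machine's id_rsa
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, not for production use
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33115", cfg) // forwarded port taken from the log
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl start kubelet")
	fmt.Printf("%s(err=%v)\n", out, err)
}
```
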
	I1109 14:15:09.556995  287405 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-326524" to be "Ready" ...
	I1109 14:15:09.589031  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:15:09.589061  287405 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:15:09.592931  287405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:09.602283  287405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:09.616485  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:15:09.616505  287405 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:15:09.642211  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:15:09.642246  287405 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:15:09.668015  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:15:09.668035  287405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:15:09.709885  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:15:09.709963  287405 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:15:09.727497  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:15:09.727521  287405 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:15:09.750226  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:15:09.750248  287405 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:15:09.768986  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:15:09.769008  287405 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:15:09.787904  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:15:09.787927  287405 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:15:09.810073  287405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:15:11.300008  287405 node_ready.go:49] node "default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:11.300040  287405 node_ready.go:38] duration metric: took 1.742965251s for node "default-k8s-diff-port-326524" to be "Ready" ...
	I1109 14:15:11.300056  287405 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:15:11.300105  287405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:15:12.002766  287405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.40979526s)
	I1109 14:15:12.002831  287405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.400519777s)
	I1109 14:15:12.002983  287405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.192863732s)
	I1109 14:15:12.003102  287405 api_server.go:72] duration metric: took 2.65905681s to wait for apiserver process to appear ...
	I1109 14:15:12.003119  287405 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:15:12.003140  287405 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:15:12.005628  287405 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-326524 addons enable metrics-server
	
	I1109 14:15:12.009263  287405 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:15:12.009299  287405 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
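
The 500 here is expected this early in startup: the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet, so minikube keeps polling /healthz until it returns 200, which it does about half a second later in this log. An illustrative poller, assuming the apiserver certificate is signed by the minikube CA rather than a system CA (hence the skipped verification):

```go
// Sketch: poll the apiserver /healthz endpoint until it reports ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Cert is issued by minikubeCA; skipping verification keeps the sketch short.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; attempt <= 20; attempt++ {
		resp, err := client.Get("https://192.168.85.2:8444/healthz") // address and port from the log
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d\n%s\n", attempt, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
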
	I1109 14:15:12.010799  287405 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:15:09.633271  280419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:15:09.638605  280419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:15:09.640169  280419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:15:09.663440  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:15:10.050955  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:10.051005  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-593530 minikube.k8s.io/updated_at=2025_11_09T14_15_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=kindnet-593530 minikube.k8s.io/primary=true
	I1109 14:15:10.051034  280419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:15:10.177520  280419 ops.go:34] apiserver oom_adj: -16
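
The ops.go line records the kube-apiserver's OOM adjustment; -16 tells the kernel's OOM killer to strongly avoid that process under memory pressure. The check the log performs is just a /proc read; roughly, as a sketch (the -n flag picking the newest match is an assumption, the log's shell used plain pgrep):

```go
// Sketch: read the OOM adjustment of the kube-apiserver process.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```
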
	I1109 14:15:10.177618  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:10.678358  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:11.177786  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:11.677754  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:12.178549  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:09.564720  292305 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:15:09.564951  292305 start.go:159] libmachine.API.Create for "custom-flannel-593530" (driver="docker")
	I1109 14:15:09.564986  292305 client.go:173] LocalClient.Create starting
	I1109 14:15:09.565056  292305 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:15:09.565105  292305 main.go:143] libmachine: Decoding PEM data...
	I1109 14:15:09.565126  292305 main.go:143] libmachine: Parsing certificate...
	I1109 14:15:09.565189  292305 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:15:09.565216  292305 main.go:143] libmachine: Decoding PEM data...
	I1109 14:15:09.565233  292305 main.go:143] libmachine: Parsing certificate...
	I1109 14:15:09.565620  292305 cli_runner.go:164] Run: docker network inspect custom-flannel-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:15:09.591661  292305 cli_runner.go:211] docker network inspect custom-flannel-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:15:09.591745  292305 network_create.go:284] running [docker network inspect custom-flannel-593530] to gather additional debugging logs...
	I1109 14:15:09.591764  292305 cli_runner.go:164] Run: docker network inspect custom-flannel-593530
	W1109 14:15:09.614454  292305 cli_runner.go:211] docker network inspect custom-flannel-593530 returned with exit code 1
	I1109 14:15:09.614482  292305 network_create.go:287] error running [docker network inspect custom-flannel-593530]: docker network inspect custom-flannel-593530: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-593530 not found
	I1109 14:15:09.614498  292305 network_create.go:289] output of [docker network inspect custom-flannel-593530]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-593530 not found
	
	** /stderr **
	I1109 14:15:09.614627  292305 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:15:09.640818  292305 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:15:09.641739  292305 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:15:09.642760  292305 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:15:09.643403  292305 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e84b4000fff1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:9e:5b:47:b5:f4} reservation:<nil>}
	I1109 14:15:09.644167  292305 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1418d8b0aecf IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:96:45:f5:f6:93:a3} reservation:<nil>}
	I1109 14:15:09.645056  292305 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2d9896d17cc8 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:0e:b1:bc:bf:18:60} reservation:<nil>}
	I1109 14:15:09.646141  292305 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f05260}
	I1109 14:15:09.646224  292305 network_create.go:124] attempt to create docker network custom-flannel-593530 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1109 14:15:09.646294  292305 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-593530 custom-flannel-593530
	I1109 14:15:09.732382  292305 network_create.go:108] docker network custom-flannel-593530 192.168.103.0/24 created
	I1109 14:15:09.732444  292305 kic.go:121] calculated static IP "192.168.103.2" for the "custom-flannel-593530" container
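
The subnet scan above walks candidate private /24 blocks and takes the first one no existing bridge network occupies; in this run 192.168.49.0 through 192.168.94.0 were taken, so 192.168.103.0/24 won. A toy version of that search; the start address and 9-address step are read off this log, not taken from minikube's source:

```go
// Sketch: pick the first free 192.168.x.0/24 candidate, stepping as the log does.
package main

import "fmt"

func main() {
	taken := map[string]bool{ // subnets the log reports as already in use
		"192.168.49.0": true, "192.168.58.0": true, "192.168.67.0": true,
		"192.168.76.0": true, "192.168.85.0": true, "192.168.94.0": true,
	}
	for third := 49; third < 256; third += 9 {
		candidate := fmt.Sprintf("192.168.%d.0", third)
		if !taken[candidate] {
			fmt.Println("using free private subnet", candidate+"/24") // prints 192.168.103.0/24 here
			break
		}
	}
}
```
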
	I1109 14:15:09.732511  292305 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:15:09.757786  292305 cli_runner.go:164] Run: docker volume create custom-flannel-593530 --label name.minikube.sigs.k8s.io=custom-flannel-593530 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:15:09.789747  292305 oci.go:103] Successfully created a docker volume custom-flannel-593530
	I1109 14:15:09.789853  292305 cli_runner.go:164] Run: docker run --rm --name custom-flannel-593530-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-593530 --entrypoint /usr/bin/test -v custom-flannel-593530:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:15:10.324279  292305 oci.go:107] Successfully prepared a docker volume custom-flannel-593530
	I1109 14:15:10.324372  292305 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:15:10.324389  292305 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:15:10.324490  292305 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-593530:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:15:12.678165  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:13.177708  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:13.677938  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:14.178225  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:14.677756  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:15.178130  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:15.301383  280419 kubeadm.go:1114] duration metric: took 5.250490884s to wait for elevateKubeSystemPrivileges
	I1109 14:15:15.301418  280419 kubeadm.go:403] duration metric: took 17.11607512s to StartCluster
	I1109 14:15:15.301437  280419 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:15.301499  280419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:15.303477  280419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:15.303865  280419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:15:15.304287  280419 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:15.304563  280419 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:15:15.304684  280419 addons.go:70] Setting storage-provisioner=true in profile "kindnet-593530"
	I1109 14:15:15.304706  280419 addons.go:239] Setting addon storage-provisioner=true in "kindnet-593530"
	I1109 14:15:15.304724  280419 addons.go:70] Setting default-storageclass=true in profile "kindnet-593530"
	I1109 14:15:15.304735  280419 host.go:66] Checking if "kindnet-593530" exists ...
	I1109 14:15:15.304739  280419 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-593530"
	I1109 14:15:15.305102  280419 cli_runner.go:164] Run: docker container inspect kindnet-593530 --format={{.State.Status}}
	I1109 14:15:15.305387  280419 config.go:182] Loaded profile config "kindnet-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:15.305870  280419 cli_runner.go:164] Run: docker container inspect kindnet-593530 --format={{.State.Status}}
	I1109 14:15:15.308077  280419 out.go:179] * Verifying Kubernetes components...
	I1109 14:15:15.309627  280419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:15.349070  280419 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:15:15.350504  280419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:15.350574  280419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:15:15.350738  280419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-593530
	I1109 14:15:15.352024  280419 addons.go:239] Setting addon default-storageclass=true in "kindnet-593530"
	I1109 14:15:15.352134  280419 host.go:66] Checking if "kindnet-593530" exists ...
	I1109 14:15:15.353153  280419 cli_runner.go:164] Run: docker container inspect kindnet-593530 --format={{.State.Status}}
	I1109 14:15:15.388873  280419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/kindnet-593530/id_rsa Username:docker}
	I1109 14:15:15.395087  280419 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:15.395108  280419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:15:15.395252  280419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-593530
	I1109 14:15:15.427397  280419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/kindnet-593530/id_rsa Username:docker}
	I1109 14:15:15.443429  280419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:15:15.559988  280419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:15.570733  280419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:15.573533  280419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:15.787685  280419 node_ready.go:35] waiting up to 15m0s for node "kindnet-593530" to be "Ready" ...
	I1109 14:15:15.787972  280419 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1109 14:15:16.178152  280419 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:15:12.011923  287405 addons.go:515] duration metric: took 2.667762886s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:15:12.504017  287405 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:15:12.509069  287405 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:15:12.510160  287405 api_server.go:141] control plane version: v1.34.1
	I1109 14:15:12.510182  287405 api_server.go:131] duration metric: took 507.056828ms to wait for apiserver health ...
	I1109 14:15:12.510193  287405 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:15:12.513951  287405 system_pods.go:59] 8 kube-system pods found
	I1109 14:15:12.513979  287405 system_pods.go:61] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:12.513988  287405 system_pods.go:61] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:15:12.513995  287405 system_pods.go:61] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:15:12.514002  287405 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:12.514008  287405 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:12.514013  287405 system_pods.go:61] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:15:12.514018  287405 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:12.514026  287405 system_pods.go:61] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:12.514031  287405 system_pods.go:74] duration metric: took 3.833097ms to wait for pod list to return data ...
	I1109 14:15:12.514041  287405 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:15:12.516369  287405 default_sa.go:45] found service account: "default"
	I1109 14:15:12.516389  287405 default_sa.go:55] duration metric: took 2.34269ms for default service account to be created ...
	I1109 14:15:12.516398  287405 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:15:12.518769  287405 system_pods.go:86] 8 kube-system pods found
	I1109 14:15:12.518795  287405 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:12.518806  287405 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:15:12.518816  287405 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:15:12.518826  287405 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:12.518834  287405 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:12.518843  287405 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:15:12.518851  287405 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:12.518868  287405 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:12.518891  287405 system_pods.go:126] duration metric: took 2.480872ms to wait for k8s-apps to be running ...
	I1109 14:15:12.518899  287405 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:15:12.518990  287405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:12.536796  287405 system_svc.go:56] duration metric: took 17.880309ms WaitForService to wait for kubelet
	I1109 14:15:12.536833  287405 kubeadm.go:587] duration metric: took 3.192786831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:15:12.536858  287405 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:15:12.539187  287405 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:15:12.539213  287405 node_conditions.go:123] node cpu capacity is 8
	I1109 14:15:12.539226  287405 node_conditions.go:105] duration metric: took 2.362428ms to run NodePressure ...
	I1109 14:15:12.539241  287405 start.go:242] waiting for startup goroutines ...
	I1109 14:15:12.539254  287405 start.go:247] waiting for cluster config update ...
	I1109 14:15:12.539271  287405 start.go:256] writing updated cluster config ...
	I1109 14:15:12.539546  287405 ssh_runner.go:195] Run: rm -f paused
	I1109 14:15:12.543827  287405 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:12.546987  287405 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:15:14.552448  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:16.556111  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	I1109 14:15:16.179775  280419 addons.go:515] duration metric: took 875.218856ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:15:16.296630  280419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-593530" context rescaled to 1 replicas
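	kapi.go:214 above reports the coredns deployment being rescaled to 1 replica for this single-node profile. minikube performs the rescale through the Kubernetes API; the sketch below only illustrates the equivalent operation by shelling out to kubectl, with the profile name reused as the kubeconfig context (an assumption made for the example):

```go
package main

import (
	"fmt"
	"os/exec"
)

// scaleCoreDNS sets the coredns deployment in kube-system to the requested
// replica count, the same end state kapi.go reports in the log above.
func scaleCoreDNS(kubeContext string, replicas int) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"-n", "kube-system", "scale", "deployment", "coredns",
		fmt.Sprintf("--replicas=%d", replicas))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("scale coredns: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := scaleCoreDNS("kindnet-593530", 1); err != nil {
		fmt.Println(err)
	}
}
```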
	I1109 14:15:15.183466  292305 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-593530:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.858927993s)
	I1109 14:15:15.183557  292305 kic.go:203] duration metric: took 4.85916342s to extract preloaded images to volume ...
	W1109 14:15:15.183742  292305 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 14:15:15.183798  292305 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 14:15:15.183845  292305 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:15:15.278360  292305 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-593530 --name custom-flannel-593530 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-593530 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-593530 --network custom-flannel-593530 --ip 192.168.103.2 --volume custom-flannel-593530:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:15:15.791683  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Running}}
	I1109 14:15:15.823726  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:15.852750  292305 cli_runner.go:164] Run: docker exec custom-flannel-593530 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:15:15.926392  292305 oci.go:144] the created container "custom-flannel-593530" has a running status.
	I1109 14:15:15.926496  292305 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa...
	I1109 14:15:16.143382  292305 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:15:16.192826  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:16.230105  292305 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:15:16.230125  292305 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-593530 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:15:16.307802  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:16.339060  292305 machine.go:94] provisionDockerMachine start ...
	I1109 14:15:16.339158  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:16.365882  292305 main.go:143] libmachine: Using SSH client type: native
	I1109 14:15:16.366245  292305 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1109 14:15:16.366261  292305 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:15:16.367251  292305 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37368->127.0.0.1:33120: read: connection reset by peer
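	The "Error dialing TCP: ssh: handshake failed ... connection reset by peer" line above is expected immediately after the container is created: sshd inside the kicbase image is not yet accepting connections on the published port (33120 here), and provisioning simply retries until the successful "SSH cmd err, output: <nil>" at 14:15:19. A minimal sketch of that wait-for-sshd idea, using only the standard library and not minikube's actual retry code:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr until a connection succeeds or the deadline passes.
// Early attempts may see "connection reset by peer" while sshd is still
// starting inside the freshly created container, as in the log above.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up on %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// 33120 is the host port Docker mapped to 22/tcp for custom-flannel-593530.
	if err := waitForTCP("127.0.0.1:33120", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("sshd is accepting connections")
}
```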
	I1109 14:15:20.489971  285057 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:15:20.490049  285057 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:15:20.490188  285057 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:15:20.490270  285057 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:15:20.490317  285057 kubeadm.go:319] OS: Linux
	I1109 14:15:20.490421  285057 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:15:20.490501  285057 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:15:20.490546  285057 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:15:20.490607  285057 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:15:20.490689  285057 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:15:20.490755  285057 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:15:20.490829  285057 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:15:20.490899  285057 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:15:20.490993  285057 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:15:20.491141  285057 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:15:20.491265  285057 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:15:20.491335  285057 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:15:20.492697  285057 out.go:252]   - Generating certificates and keys ...
	I1109 14:15:20.492795  285057 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:15:20.492891  285057 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:15:20.492998  285057 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:15:20.493096  285057 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:15:20.493217  285057 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:15:20.493288  285057 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:15:20.493363  285057 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:15:20.493513  285057 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-593530 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1109 14:15:20.493584  285057 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:15:20.493745  285057 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-593530 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1109 14:15:20.493833  285057 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:15:20.493923  285057 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:15:20.493983  285057 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:15:20.494059  285057 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:15:20.494137  285057 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:15:20.494213  285057 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:15:20.494295  285057 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:15:20.494383  285057 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:15:20.494491  285057 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:15:20.494625  285057 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:15:20.494749  285057 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:15:20.496287  285057 out.go:252]   - Booting up control plane ...
	I1109 14:15:20.496383  285057 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:15:20.496489  285057 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:15:20.496580  285057 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:15:20.496745  285057 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:15:20.496885  285057 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:15:20.497027  285057 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:15:20.497145  285057 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:15:20.497206  285057 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:15:20.497398  285057 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:15:20.497544  285057 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:15:20.497634  285057 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001183215s
	I1109 14:15:20.497783  285057 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:15:20.497899  285057 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1109 14:15:20.498049  285057 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:15:20.498180  285057 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:15:20.498288  285057 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.18698712s
	I1109 14:15:20.498398  285057 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.533905838s
	I1109 14:15:20.498521  285057 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001883071s
	I1109 14:15:20.498699  285057 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:15:20.498867  285057 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:15:20.498955  285057 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:15:20.499232  285057 kubeadm.go:319] [mark-control-plane] Marking the node calico-593530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:15:20.499328  285057 kubeadm.go:319] [bootstrap-token] Using token: yjsvjs.iphyvpgb7olgu2sq
	I1109 14:15:20.500606  285057 out.go:252]   - Configuring RBAC rules ...
	I1109 14:15:20.500765  285057 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:15:20.500886  285057 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:15:20.501091  285057 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:15:20.501272  285057 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:15:20.501443  285057 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:15:20.501573  285057 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:15:20.501779  285057 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:15:20.501846  285057 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:15:20.501913  285057 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:15:20.501923  285057 kubeadm.go:319] 
	I1109 14:15:20.502013  285057 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:15:20.502023  285057 kubeadm.go:319] 
	I1109 14:15:20.502142  285057 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:15:20.502151  285057 kubeadm.go:319] 
	I1109 14:15:20.502186  285057 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:15:20.502291  285057 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:15:20.502371  285057 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:15:20.502378  285057 kubeadm.go:319] 
	I1109 14:15:20.502448  285057 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:15:20.502464  285057 kubeadm.go:319] 
	I1109 14:15:20.502527  285057 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:15:20.502536  285057 kubeadm.go:319] 
	I1109 14:15:20.502601  285057 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:15:20.502724  285057 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:15:20.502832  285057 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:15:20.502846  285057 kubeadm.go:319] 
	I1109 14:15:20.502986  285057 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:15:20.503103  285057 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:15:20.503113  285057 kubeadm.go:319] 
	I1109 14:15:20.503231  285057 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yjsvjs.iphyvpgb7olgu2sq \
	I1109 14:15:20.503385  285057 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:15:20.503422  285057 kubeadm.go:319] 	--control-plane 
	I1109 14:15:20.503431  285057 kubeadm.go:319] 
	I1109 14:15:20.503543  285057 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:15:20.503553  285057 kubeadm.go:319] 
	I1109 14:15:20.503687  285057 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yjsvjs.iphyvpgb7olgu2sq \
	I1109 14:15:20.503827  285057 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 14:15:20.503838  285057 cni.go:84] Creating CNI manager for "calico"
	I1109 14:15:20.507565  285057 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1109 14:15:20.509507  285057 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:15:20.509531  285057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329845 bytes)
	I1109 14:15:20.524304  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:15:21.544389  285057 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.020047162s)
	I1109 14:15:21.544429  285057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:15:21.544571  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:21.544686  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-593530 minikube.k8s.io/updated_at=2025_11_09T14_15_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=calico-593530 minikube.k8s.io/primary=true
	I1109 14:15:21.655688  285057 ops.go:34] apiserver oom_adj: -16
	I1109 14:15:21.655781  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1109 14:15:19.052783  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:21.058845  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:17.791410  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:19.792408  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:21.792804  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
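	The node_ready.go warnings above show the normal polling pattern: the node stays NotReady until the kindnet CNI pod is running, so the check retries within its 15m budget. A small sketch of the same condition check via kubectl's JSONPath output; the 2-second interval and 30-attempt cap are assumptions for the example, not minikube's actual backoff:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady returns true when the node's Ready condition reports "True",
// the same information node_ready.go is polling for in the log above.
func nodeReady(kubeContext, node string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "get", "node", node,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for attempt := 0; attempt < 30; attempt++ {
		if ready, err := nodeReady("kindnet-593530", "kindnet-593530"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println(`node is not "Ready" yet (will retry)`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for the node to become Ready")
}
```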
	I1109 14:15:19.509033  292305 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-593530
	
	I1109 14:15:19.509062  292305 ubuntu.go:182] provisioning hostname "custom-flannel-593530"
	I1109 14:15:19.509131  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:19.534864  292305 main.go:143] libmachine: Using SSH client type: native
	I1109 14:15:19.536130  292305 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1109 14:15:19.536154  292305 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-593530 && echo "custom-flannel-593530" | sudo tee /etc/hostname
	I1109 14:15:19.700709  292305 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-593530
	
	I1109 14:15:19.700801  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:19.722041  292305 main.go:143] libmachine: Using SSH client type: native
	I1109 14:15:19.722426  292305 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1109 14:15:19.722454  292305 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-593530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-593530/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-593530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:15:19.869356  292305 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:15:19.869395  292305 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:15:19.869419  292305 ubuntu.go:190] setting up certificates
	I1109 14:15:19.869433  292305 provision.go:84] configureAuth start
	I1109 14:15:19.869485  292305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-593530
	I1109 14:15:19.892501  292305 provision.go:143] copyHostCerts
	I1109 14:15:19.892569  292305 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:15:19.892585  292305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:15:19.892751  292305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:15:19.892940  292305 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:15:19.892955  292305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:15:19.893768  292305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:15:19.893998  292305 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:15:19.894016  292305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:15:19.894067  292305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:15:19.894169  292305 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-593530 san=[127.0.0.1 192.168.103.2 custom-flannel-593530 localhost minikube]
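	provision.go:117 above generates the machine's server certificate with the SANs [127.0.0.1 192.168.103.2 custom-flannel-593530 localhost minikube], signed by the profile's CA (ca.pem/ca-key.pem). The sketch below only shows how such SANs are expressed in a Go x509 template; it self-signs for brevity instead of signing with the minikube CA:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs taken from the provision.go line above: two IPs plus three hostnames.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-593530"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"custom-flannel-593530", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	// Self-signed for the sketch; minikube signs with its CA cert and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```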
	I1109 14:15:20.152110  292305 provision.go:177] copyRemoteCerts
	I1109 14:15:20.152180  292305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:15:20.152294  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:20.178378  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:20.280436  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:15:20.303490  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:15:20.326112  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:15:20.348207  292305 provision.go:87] duration metric: took 478.760759ms to configureAuth
	I1109 14:15:20.348237  292305 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:15:20.348418  292305 config.go:182] Loaded profile config "custom-flannel-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:20.348540  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:20.370860  292305 main.go:143] libmachine: Using SSH client type: native
	I1109 14:15:20.371177  292305 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1109 14:15:20.371202  292305 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:15:20.650578  292305 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:15:20.650604  292305 machine.go:97] duration metric: took 4.311520822s to provisionDockerMachine
	I1109 14:15:20.650617  292305 client.go:176] duration metric: took 11.085624819s to LocalClient.Create
	I1109 14:15:20.650630  292305 start.go:167] duration metric: took 11.08567915s to libmachine.API.Create "custom-flannel-593530"
	I1109 14:15:20.650665  292305 start.go:293] postStartSetup for "custom-flannel-593530" (driver="docker")
	I1109 14:15:20.650680  292305 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:15:20.650755  292305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:15:20.650820  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:20.675138  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:20.784389  292305 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:15:20.789100  292305 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:15:20.789133  292305 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:15:20.789146  292305 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:15:20.789200  292305 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:15:20.789306  292305 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:15:20.789423  292305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:15:20.799678  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:15:20.826942  292305 start.go:296] duration metric: took 176.260456ms for postStartSetup
	I1109 14:15:20.827473  292305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-593530
	I1109 14:15:20.853401  292305 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/config.json ...
	I1109 14:15:20.853846  292305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:15:20.853909  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:20.880560  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:20.985667  292305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:15:20.991818  292305 start.go:128] duration metric: took 11.42871736s to createHost
	I1109 14:15:20.991872  292305 start.go:83] releasing machines lock for "custom-flannel-593530", held for 11.428883095s
	I1109 14:15:20.991956  292305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-593530
	I1109 14:15:21.018112  292305 ssh_runner.go:195] Run: cat /version.json
	I1109 14:15:21.018179  292305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:15:21.018195  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:21.018261  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:21.043668  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:21.047050  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:21.149970  292305 ssh_runner.go:195] Run: systemctl --version
	I1109 14:15:21.237272  292305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:15:21.292179  292305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:15:21.300946  292305 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:15:21.301014  292305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:15:21.349114  292305 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:15:21.349142  292305 start.go:496] detecting cgroup driver to use...
	I1109 14:15:21.349177  292305 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:15:21.349229  292305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:15:21.373055  292305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:15:21.393167  292305 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:15:21.393343  292305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:15:21.420863  292305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:15:21.457951  292305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:15:21.581499  292305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:15:21.705285  292305 docker.go:234] disabling docker service ...
	I1109 14:15:21.705373  292305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:15:21.730862  292305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:15:21.749073  292305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:15:21.881211  292305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:15:21.999688  292305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:15:22.017667  292305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:15:22.034631  292305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:15:22.034707  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.134259  292305 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:15:22.134338  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.167652  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.179909  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.216274  292305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:15:22.226166  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.244315  292305 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.293992  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.304234  292305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:15:22.311974  292305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:15:22.319571  292305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:22.409241  292305 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:15:23.021268  292305 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:15:23.021335  292305 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:15:23.025716  292305 start.go:564] Will wait 60s for crictl version
	I1109 14:15:23.025767  292305 ssh_runner.go:195] Run: which crictl
	I1109 14:15:23.029924  292305 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:15:23.056756  292305 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:15:23.056835  292305 ssh_runner.go:195] Run: crio --version
	I1109 14:15:23.083822  292305 ssh_runner.go:195] Run: crio --version
	I1109 14:15:23.111840  292305 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:15:23.112812  292305 cli_runner.go:164] Run: docker network inspect custom-flannel-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:15:23.129398  292305 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1109 14:15:23.133371  292305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:15:23.144139  292305 kubeadm.go:884] updating cluster {Name:custom-flannel-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCore
DNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:15:23.144265  292305 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:15:23.144312  292305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:15:23.176128  292305 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:15:23.176149  292305 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:15:23.176195  292305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:15:23.202207  292305 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:15:23.202230  292305 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:15:23.202239  292305 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1109 14:15:23.202354  292305 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-593530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1109 14:15:23.202432  292305 ssh_runner.go:195] Run: crio config
	I1109 14:15:23.249086  292305 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1109 14:15:23.249126  292305 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:15:23.249153  292305 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-593530 NodeName:custom-flannel-593530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:15:23.249292  292305 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-593530"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:15:23.249347  292305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:15:23.258175  292305 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:15:23.258228  292305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:15:23.265460  292305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1109 14:15:23.277235  292305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:15:23.291948  292305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1109 14:15:23.303747  292305 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:15:23.307115  292305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:15:23.316244  292305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:23.396667  292305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:23.417622  292305 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530 for IP: 192.168.103.2
	I1109 14:15:23.417652  292305 certs.go:195] generating shared ca certs ...
	I1109 14:15:23.417670  292305 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.417825  292305 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:15:23.417874  292305 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:15:23.417887  292305 certs.go:257] generating profile certs ...
	I1109 14:15:23.417955  292305 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.key
	I1109 14:15:23.417971  292305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.crt with IP's: []
	I1109 14:15:23.475470  292305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.crt ...
	I1109 14:15:23.475495  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.crt: {Name:mk6cc8a56c5a7e03bae4f26e654eb21732b60f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.475666  292305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.key ...
	I1109 14:15:23.475688  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.key: {Name:mkd32921880ae6490d9b36f6589b11af2e82bda4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.475808  292305 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key.bd96a32b
	I1109 14:15:23.475837  292305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt.bd96a32b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1109 14:15:23.507789  292305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt.bd96a32b ...
	I1109 14:15:23.507814  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt.bd96a32b: {Name:mkea1975b9862b4f62d0e1cfe3f59dac63fdc488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.507963  292305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key.bd96a32b ...
	I1109 14:15:23.507982  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key.bd96a32b: {Name:mkd8316faf286f0a2a7f529b2fea1fdabd61ffa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.508079  292305 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt.bd96a32b -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt
	I1109 14:15:23.508183  292305 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key.bd96a32b -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key
	I1109 14:15:23.508266  292305 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.key
	I1109 14:15:23.508290  292305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.crt with IP's: []
	I1109 14:15:24.163784  292305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.crt ...
	I1109 14:15:24.163808  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.crt: {Name:mkdc1d9208a395139efe0f54f1eb35bd3a932934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:24.163955  292305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.key ...
	I1109 14:15:24.163970  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.key: {Name:mka27109506b5085edf8a42f4a73129a9eb93eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:24.164130  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:15:24.164173  292305 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:15:24.164187  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:15:24.164217  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:15:24.164244  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:15:24.164265  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:15:24.164303  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:15:24.164883  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:15:24.182788  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:15:24.200476  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:15:24.220455  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:15:24.237975  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1109 14:15:22.156614  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:22.656717  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:23.156239  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:23.656838  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:24.156499  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:24.656141  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:25.156834  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:25.656459  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:25.739531  285057 kubeadm.go:1114] duration metric: took 4.195008704s to wait for elevateKubeSystemPrivileges
	I1109 14:15:25.739565  285057 kubeadm.go:403] duration metric: took 18.579357042s to StartCluster
	I1109 14:15:25.739586  285057 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:25.739699  285057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:25.741526  285057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:25.741787  285057 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:25.741826  285057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:15:25.742027  285057 config.go:182] Loaded profile config "calico-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:25.741976  285057 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:15:25.742052  285057 addons.go:70] Setting storage-provisioner=true in profile "calico-593530"
	I1109 14:15:25.742069  285057 addons.go:70] Setting default-storageclass=true in profile "calico-593530"
	I1109 14:15:25.742071  285057 addons.go:239] Setting addon storage-provisioner=true in "calico-593530"
	I1109 14:15:25.742081  285057 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-593530"
	I1109 14:15:25.742103  285057 host.go:66] Checking if "calico-593530" exists ...
	I1109 14:15:25.742807  285057 cli_runner.go:164] Run: docker container inspect calico-593530 --format={{.State.Status}}
	I1109 14:15:25.743008  285057 cli_runner.go:164] Run: docker container inspect calico-593530 --format={{.State.Status}}
	I1109 14:15:25.745013  285057 out.go:179] * Verifying Kubernetes components...
	I1109 14:15:25.746081  285057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:25.771718  285057 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:15:25.772793  285057 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:25.772810  285057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:15:25.772865  285057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-593530
	I1109 14:15:25.774122  285057 addons.go:239] Setting addon default-storageclass=true in "calico-593530"
	I1109 14:15:25.774172  285057 host.go:66] Checking if "calico-593530" exists ...
	I1109 14:15:25.774670  285057 cli_runner.go:164] Run: docker container inspect calico-593530 --format={{.State.Status}}
	I1109 14:15:25.801675  285057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/calico-593530/id_rsa Username:docker}
	I1109 14:15:25.806922  285057 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:25.806945  285057 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:15:25.807008  285057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-593530
	I1109 14:15:25.835285  285057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/calico-593530/id_rsa Username:docker}
	I1109 14:15:25.874911  285057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:15:25.927388  285057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:25.938215  285057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:25.951126  285057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:26.058735  285057 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1109 14:15:26.230438  285057 node_ready.go:35] waiting up to 15m0s for node "calico-593530" to be "Ready" ...
	I1109 14:15:26.234893  285057 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:15:26.235861  285057 addons.go:515] duration metric: took 493.881381ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:15:26.563938  285057 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-593530" context rescaled to 1 replicas
	W1109 14:15:23.551954  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:25.552357  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:24.291655  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:26.791414  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:24.257331  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:15:24.273901  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:15:24.290399  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:15:24.307504  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:15:24.324968  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:15:24.341724  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:15:24.358442  292305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:15:24.370001  292305 ssh_runner.go:195] Run: openssl version
	I1109 14:15:24.375754  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:15:24.383447  292305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:15:24.386799  292305 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:15:24.386842  292305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:15:24.421676  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:15:24.429446  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:15:24.437149  292305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:15:24.440861  292305 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:15:24.440909  292305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:15:24.495759  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:15:24.504738  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:15:24.512949  292305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:24.516510  292305 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:24.516556  292305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:24.555449  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:15:24.564580  292305 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:15:24.568327  292305 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:15:24.568395  292305 kubeadm.go:401] StartCluster: {Name:custom-flannel-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:15:24.568463  292305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:15:24.568515  292305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:15:24.595272  292305 cri.go:89] found id: ""
	I1109 14:15:24.595332  292305 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:15:24.603201  292305 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:15:24.611034  292305 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:15:24.611084  292305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:15:24.618591  292305 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:15:24.618605  292305 kubeadm.go:158] found existing configuration files:
	
	I1109 14:15:24.618635  292305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:15:24.625810  292305 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:15:24.625860  292305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:15:24.632814  292305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:15:24.640133  292305 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:15:24.640173  292305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:15:24.647571  292305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:15:24.654499  292305 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:15:24.654542  292305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:15:24.661867  292305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:15:24.668941  292305 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:15:24.668982  292305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:15:24.676006  292305 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:15:24.737957  292305 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:15:24.795866  292305 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1109 14:15:28.233281  285057 node_ready.go:57] node "calico-593530" has "Ready":"False" status (will retry)
	W1109 14:15:30.233963  285057 node_ready.go:57] node "calico-593530" has "Ready":"False" status (will retry)
	I1109 14:15:30.734928  285057 node_ready.go:49] node "calico-593530" is "Ready"
	I1109 14:15:30.734960  285057 node_ready.go:38] duration metric: took 4.50449231s for node "calico-593530" to be "Ready" ...
	I1109 14:15:30.734976  285057 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:15:30.735037  285057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:15:30.754867  285057 api_server.go:72] duration metric: took 5.013042529s to wait for apiserver process to appear ...
	I1109 14:15:30.754903  285057 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:15:30.754925  285057 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1109 14:15:30.767548  285057 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1109 14:15:30.769336  285057 api_server.go:141] control plane version: v1.34.1
	I1109 14:15:30.769394  285057 api_server.go:131] duration metric: took 14.481552ms to wait for apiserver health ...
	I1109 14:15:30.769411  285057 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:15:30.776264  285057 system_pods.go:59] 9 kube-system pods found
	I1109 14:15:30.776311  285057 system_pods.go:61] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:30.776328  285057 system_pods.go:61] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:30.776337  285057 system_pods.go:61] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:30.776347  285057 system_pods.go:61] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:30.776352  285057 system_pods.go:61] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:30.776359  285057 system_pods.go:61] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:30.776363  285057 system_pods.go:61] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:30.776368  285057 system_pods.go:61] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:30.776374  285057 system_pods.go:61] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:30.776382  285057 system_pods.go:74] duration metric: took 6.946108ms to wait for pod list to return data ...
	I1109 14:15:30.776392  285057 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:15:30.779990  285057 default_sa.go:45] found service account: "default"
	I1109 14:15:30.780014  285057 default_sa.go:55] duration metric: took 3.615627ms for default service account to be created ...
	I1109 14:15:30.780026  285057 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:15:30.861040  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:30.861076  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:30.861089  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:30.861106  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:30.861113  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:30.861119  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:30.861134  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:30.861142  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:30.861149  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:30.861158  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:30.861206  285057 retry.go:31] will retry after 222.389521ms: missing components: kube-dns
	I1109 14:15:31.088074  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:31.088116  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:31.088127  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:31.088181  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:31.088194  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:31.088202  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:31.088207  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:31.088211  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:31.088214  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:31.088217  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:31.088233  285057 retry.go:31] will retry after 259.900062ms: missing components: kube-dns
	I1109 14:15:31.356775  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:31.356818  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:31.356838  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:31.356854  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:31.356863  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:31.356871  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:31.356879  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:31.356885  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:31.356896  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:31.356902  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:31.356919  285057 retry.go:31] will retry after 380.857905ms: missing components: kube-dns
	W1109 14:15:27.553578  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:30.053185  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:29.291525  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:31.293205  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:34.760952  292305 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:15:34.761035  292305 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:15:34.761189  292305 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:15:34.761308  292305 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:15:34.761393  292305 kubeadm.go:319] OS: Linux
	I1109 14:15:34.761469  292305 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:15:34.761536  292305 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:15:34.761631  292305 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:15:34.761717  292305 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:15:34.761788  292305 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:15:34.761854  292305 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:15:34.761930  292305 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:15:34.761994  292305 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:15:34.762086  292305 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:15:34.762214  292305 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:15:34.762345  292305 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:15:34.762435  292305 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:15:34.853035  292305 out.go:252]   - Generating certificates and keys ...
	I1109 14:15:34.853151  292305 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:15:34.853250  292305 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:15:34.853367  292305 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:15:34.853461  292305 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:15:34.853555  292305 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:15:34.853624  292305 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:15:34.853714  292305 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:15:34.853887  292305 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-593530 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1109 14:15:34.853992  292305 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:15:34.854205  292305 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-593530 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1109 14:15:34.854312  292305 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:15:34.854404  292305 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:15:34.854470  292305 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:15:34.854551  292305 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:15:34.854628  292305 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:15:34.854738  292305 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:15:34.854807  292305 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:15:34.854936  292305 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:15:34.855040  292305 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:15:34.855177  292305 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:15:34.855276  292305 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:15:35.003588  292305 out.go:252]   - Booting up control plane ...
	I1109 14:15:35.003746  292305 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:15:35.003873  292305 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:15:35.003979  292305 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:15:35.004143  292305 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:15:35.004300  292305 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:15:35.004467  292305 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:15:35.004606  292305 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:15:35.004688  292305 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:15:35.004878  292305 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:15:35.005037  292305 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:15:35.005130  292305 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001593024s
	I1109 14:15:35.005279  292305 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:15:35.005395  292305 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1109 14:15:35.005533  292305 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:15:35.005675  292305 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:15:35.005783  292305 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.592199819s
	I1109 14:15:35.005876  292305 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.59222063s
	I1109 14:15:35.005980  292305 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50174081s
	I1109 14:15:35.006127  292305 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:15:35.006295  292305 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:15:35.006376  292305 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:15:35.006690  292305 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-593530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:15:35.006770  292305 kubeadm.go:319] [bootstrap-token] Using token: j7ym4d.0t4svojy4g5mzhlf
	I1109 14:15:35.046560  292305 out.go:252]   - Configuring RBAC rules ...
	I1109 14:15:35.046804  292305 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:15:35.046919  292305 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:15:35.047109  292305 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:15:35.047282  292305 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:15:35.047479  292305 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:15:35.047584  292305 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:15:35.047749  292305 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:15:35.047823  292305 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:15:35.049090  292305 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:15:35.049104  292305 kubeadm.go:319] 
	I1109 14:15:35.049183  292305 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:15:35.049193  292305 kubeadm.go:319] 
	I1109 14:15:35.049312  292305 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:15:35.049320  292305 kubeadm.go:319] 
	I1109 14:15:35.049369  292305 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:15:35.049569  292305 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:15:35.049762  292305 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:15:35.049823  292305 kubeadm.go:319] 
	I1109 14:15:35.049902  292305 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:15:35.049914  292305 kubeadm.go:319] 
	I1109 14:15:35.049985  292305 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:15:35.049991  292305 kubeadm.go:319] 
	I1109 14:15:35.050071  292305 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:15:35.050363  292305 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:15:35.050547  292305 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:15:35.050571  292305 kubeadm.go:319] 
	I1109 14:15:35.050735  292305 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:15:35.050950  292305 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:15:35.050963  292305 kubeadm.go:319] 
	I1109 14:15:35.051066  292305 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j7ym4d.0t4svojy4g5mzhlf \
	I1109 14:15:35.052172  292305 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:15:35.052249  292305 kubeadm.go:319] 	--control-plane 
	I1109 14:15:35.052261  292305 kubeadm.go:319] 
	I1109 14:15:35.052378  292305 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:15:35.052388  292305 kubeadm.go:319] 
	I1109 14:15:35.052505  292305 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j7ym4d.0t4svojy4g5mzhlf \
	I1109 14:15:35.052710  292305 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 14:15:35.052728  292305 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1109 14:15:35.055275  292305 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1109 14:15:31.741481  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:31.741516  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:31.741529  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:31.741534  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:31.741538  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:31.741544  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:31.741550  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:31.741558  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:31.741563  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:31.741568  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:31.741585  285057 retry.go:31] will retry after 380.777126ms: missing components: kube-dns
	I1109 14:15:32.129801  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:32.129926  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:32.129943  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:32.129952  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:32.129959  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:32.129967  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:32.129975  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:32.129981  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:32.130014  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:32.130030  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:32.130056  285057 retry.go:31] will retry after 658.546064ms: missing components: kube-dns
	I1109 14:15:32.792973  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:32.793012  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:32.793023  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:32.793034  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:32.793040  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:32.793048  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:32.793056  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:32.793061  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:32.793066  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:32.793071  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:32.793087  285057 retry.go:31] will retry after 852.732952ms: missing components: kube-dns
	I1109 14:15:33.651061  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:33.651101  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:33.651115  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:33.651131  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:33.651138  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:33.651149  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:33.651157  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:33.651166  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:33.651172  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:33.651180  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:33.651198  285057 retry.go:31] will retry after 882.469174ms: missing components: kube-dns
	I1109 14:15:34.538792  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:34.538823  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:34.538832  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:34.538838  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:34.538843  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:34.538848  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:34.538851  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:34.538857  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:34.538860  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:34.538864  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:34.538877  285057 retry.go:31] will retry after 1.018334092s: missing components: kube-dns
	I1109 14:15:35.562102  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:35.562134  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:35.562148  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:35.562158  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:35.562168  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:35.562176  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:35.562181  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:35.562190  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:35.562196  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:35.562204  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:35.562222  285057 retry.go:31] will retry after 1.779834697s: missing components: kube-dns
	W1109 14:15:32.553177  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:35.054319  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:33.791388  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:36.291162  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:35.056434  292305 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:15:35.056494  292305 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1109 14:15:35.061959  292305 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1109 14:15:35.061986  292305 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1109 14:15:35.086158  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:15:35.460740  292305 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:15:35.460827  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:35.460871  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-593530 minikube.k8s.io/updated_at=2025_11_09T14_15_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=custom-flannel-593530 minikube.k8s.io/primary=true
	I1109 14:15:35.473013  292305 ops.go:34] apiserver oom_adj: -16
	I1109 14:15:35.560089  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:36.060333  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:36.561021  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:37.060850  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:37.560914  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:38.060839  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:38.560383  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:39.061005  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:39.560650  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:39.641155  292305 kubeadm.go:1114] duration metric: took 4.180391897s to wait for elevateKubeSystemPrivileges
	I1109 14:15:39.641195  292305 kubeadm.go:403] duration metric: took 15.072805775s to StartCluster
	I1109 14:15:39.641214  292305 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:39.641288  292305 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:39.643360  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:39.643611  292305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:15:39.643621  292305 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:39.643710  292305 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:15:39.643853  292305 config.go:182] Loaded profile config "custom-flannel-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:39.643857  292305 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-593530"
	I1109 14:15:39.643883  292305 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-593530"
	I1109 14:15:39.643906  292305 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-593530"
	I1109 14:15:39.643919  292305 host.go:66] Checking if "custom-flannel-593530" exists ...
	I1109 14:15:39.643929  292305 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-593530"
	I1109 14:15:39.644473  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:39.644552  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:39.645098  292305 out.go:179] * Verifying Kubernetes components...
	I1109 14:15:39.646129  292305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:39.668733  292305 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:15:39.669229  292305 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-593530"
	I1109 14:15:39.669271  292305 host.go:66] Checking if "custom-flannel-593530" exists ...
	I1109 14:15:39.669786  292305 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:39.669808  292305 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:15:39.669853  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:39.669788  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:39.697979  292305 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:39.698001  292305 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:15:39.698133  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:39.699215  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:39.722993  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:39.749350  292305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:15:39.809675  292305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:39.819927  292305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:39.857165  292305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:39.994216  292305 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
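For reference, the host record injected above lands in the CoreDNS ConfigMap and can be inspected directly. A minimal sketch, assuming the kubeconfig context name matches the minikube profile name from the log and kubectl is on PATH:

    # Show the CoreDNS Corefile; the injected block should contain
    # "hosts { 192.168.103.1 host.minikube.internal ... fallthrough }"
    kubectl --context custom-flannel-593530 -n kube-system get configmap coredns -o yaml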
	I1109 14:15:39.995748  292305 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-593530" to be "Ready" ...
	I1109 14:15:40.350538  292305 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:15:37.346530  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:37.346571  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:37.346582  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:37.346594  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:37.346619  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:37.346630  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:37.346636  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:37.346653  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:37.346658  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:37.346663  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:37.346688  285057 retry.go:31] will retry after 1.732906923s: missing components: kube-dns
	I1109 14:15:39.084388  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:39.084425  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:39.084433  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:39.084442  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:39.084447  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:39.084452  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:39.084455  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:39.084460  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:39.084465  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:39.084470  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:39.084486  285057 retry.go:31] will retry after 1.849866542s: missing components: kube-dns
	I1109 14:15:40.938306  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:40.938336  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:40.938343  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:40.938350  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:40.938354  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:40.938358  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:40.938361  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:40.938365  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:40.938370  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:40.938373  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:40.938385  285057 retry.go:31] will retry after 3.175085388s: missing components: kube-dns
	W1109 14:15:37.551964  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:40.053058  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:38.293327  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:40.791137  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:40.352482  292305 addons.go:515] duration metric: took 708.767967ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:15:40.500005  292305 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-593530" context rescaled to 1 replicas
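The "rescaled to 1 replicas" entry above corresponds to shrinking the coredns Deployment to a single replica. A minimal hand-run equivalent, assuming the same context name as the profile:

    # Scale the coredns Deployment in kube-system down to one replica
    kubectl --context custom-flannel-593530 -n kube-system scale deployment coredns --replicas=1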
	W1109 14:15:41.999207  292305 node_ready.go:57] node "custom-flannel-593530" has "Ready":"False" status (will retry)
	I1109 14:15:43.498508  292305 node_ready.go:49] node "custom-flannel-593530" is "Ready"
	I1109 14:15:43.498532  292305 node_ready.go:38] duration metric: took 3.502752606s for node "custom-flannel-593530" to be "Ready" ...
	I1109 14:15:43.498546  292305 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:15:43.498592  292305 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:15:43.510170  292305 api_server.go:72] duration metric: took 3.866491794s to wait for apiserver process to appear ...
	I1109 14:15:43.510192  292305 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:15:43.510207  292305 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1109 14:15:43.514730  292305 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1109 14:15:43.515441  292305 api_server.go:141] control plane version: v1.34.1
	I1109 14:15:43.515462  292305 api_server.go:131] duration metric: took 5.265462ms to wait for apiserver health ...
	I1109 14:15:43.515470  292305 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:15:43.518520  292305 system_pods.go:59] 7 kube-system pods found
	I1109 14:15:43.518552  292305 system_pods.go:61] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:43.518561  292305 system_pods.go:61] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:43.518569  292305 system_pods.go:61] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:43.518574  292305 system_pods.go:61] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:43.518578  292305 system_pods.go:61] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:43.518583  292305 system_pods.go:61] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:43.518588  292305 system_pods.go:61] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:43.518593  292305 system_pods.go:74] duration metric: took 3.118435ms to wait for pod list to return data ...
	I1109 14:15:43.518599  292305 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:15:43.520498  292305 default_sa.go:45] found service account: "default"
	I1109 14:15:43.520515  292305 default_sa.go:55] duration metric: took 1.910237ms for default service account to be created ...
	I1109 14:15:43.520524  292305 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:15:43.522862  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:43.522884  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:43.522891  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:43.522901  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:43.522907  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:43.522912  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:43.522929  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:43.522934  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:43.522951  292305 retry.go:31] will retry after 271.966763ms: missing components: kube-dns
	I1109 14:15:43.798286  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:43.798325  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:43.798334  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:43.798345  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:43.798351  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:43.798359  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:43.798366  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:43.798372  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:43.798396  292305 retry.go:31] will retry after 248.517234ms: missing components: kube-dns
	I1109 14:15:44.051393  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:44.051428  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:44.051434  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:44.051441  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:44.051446  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:44.051452  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:44.051458  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:44.051465  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:44.051485  292305 retry.go:31] will retry after 307.177206ms: missing components: kube-dns
	W1109 14:15:42.055199  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	I1109 14:15:44.053379  287405 pod_ready.go:94] pod "coredns-66bc5c9577-z8lkx" is "Ready"
	I1109 14:15:44.053403  287405 pod_ready.go:86] duration metric: took 31.506392424s for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.055835  287405 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.059556  287405 pod_ready.go:94] pod "etcd-default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:44.059581  287405 pod_ready.go:86] duration metric: took 3.725825ms for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.061759  287405 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.065473  287405 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:44.065494  287405 pod_ready.go:86] duration metric: took 3.713918ms for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.067343  287405 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.250877  287405 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:44.250902  287405 pod_ready.go:86] duration metric: took 183.538136ms for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.450973  287405 pod_ready.go:83] waiting for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.851555  287405 pod_ready.go:94] pod "kube-proxy-n95wb" is "Ready"
	I1109 14:15:44.851581  287405 pod_ready.go:86] duration metric: took 400.585297ms for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:45.050455  287405 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:45.451204  287405 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:45.451228  287405 pod_ready.go:86] duration metric: took 400.750017ms for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:45.451239  287405 pod_ready.go:40] duration metric: took 32.907381754s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:45.496002  287405 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:15:45.497770  287405 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-326524" cluster and "default" namespace by default
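The pod_ready waits logged above can be reproduced by hand with kubectl. A minimal sketch, assuming the kubeconfig context name matches the profile name from the log:

    # Wait for CoreDNS to report Ready, then list the labelled kube-system pods the test polls
    kubectl --context default-k8s-diff-port-326524 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    kubectl --context default-k8s-diff-port-326524 -n kube-system get pods -l 'k8s-app in (kube-dns,kube-proxy)'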
	I1109 14:15:44.119430  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:44.119460  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:44.119469  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:44.119477  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:44.119481  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:44.119485  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:44.119488  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:44.119492  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:44.119495  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:44.119498  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:44.119510  285057 retry.go:31] will retry after 4.333587155s: missing components: kube-dns
	W1109 14:15:43.290975  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:45.291386  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:44.363012  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:44.363047  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:44.363053  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:44.363058  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:44.363063  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:44.363066  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:44.363078  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:44.363082  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:44.363102  292305 retry.go:31] will retry after 593.567401ms: missing components: kube-dns
	I1109 14:15:44.960309  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:44.960362  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:44.960372  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:44.960385  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:44.960397  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:44.960402  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:44.960408  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:44.960415  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:44.960433  292305 retry.go:31] will retry after 649.59511ms: missing components: kube-dns
	I1109 14:15:45.614668  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:45.614707  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:45.614716  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:45.614724  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:45.614730  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:45.614735  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:45.614746  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:45.614751  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:45.614772  292305 retry.go:31] will retry after 928.305564ms: missing components: kube-dns
	I1109 14:15:46.547048  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:46.547085  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:46.547094  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:46.547102  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:46.547108  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:46.547113  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:46.547118  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:46.547123  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:46.547142  292305 retry.go:31] will retry after 1.104834349s: missing components: kube-dns
	I1109 14:15:47.657070  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:47.657132  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:47.657140  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:47.657160  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:47.657176  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:47.657181  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:47.657186  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:47.657195  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:47.657214  292305 retry.go:31] will retry after 1.315228447s: missing components: kube-dns
	I1109 14:15:48.976003  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:48.976050  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:48.976059  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:48.976067  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:48.976074  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:48.976078  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:48.976082  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:48.976087  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:48.976106  292305 retry.go:31] will retry after 1.836787676s: missing components: kube-dns
	I1109 14:15:48.457511  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:48.457542  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Running / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:48.457550  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:48.457555  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Running
	I1109 14:15:48.457559  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:48.457562  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:48.457567  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:48.457570  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:48.457573  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:48.457576  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:48.457585  285057 system_pods.go:126] duration metric: took 17.677552743s to wait for k8s-apps to be running ...
	I1109 14:15:48.457594  285057 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:15:48.457631  285057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:48.470390  285057 system_svc.go:56] duration metric: took 12.787082ms WaitForService to wait for kubelet
	I1109 14:15:48.470423  285057 kubeadm.go:587] duration metric: took 22.728603366s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:15:48.470440  285057 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:15:48.473267  285057 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:15:48.473290  285057 node_conditions.go:123] node cpu capacity is 8
	I1109 14:15:48.473302  285057 node_conditions.go:105] duration metric: took 2.858053ms to run NodePressure ...
	I1109 14:15:48.473315  285057 start.go:242] waiting for startup goroutines ...
	I1109 14:15:48.473324  285057 start.go:247] waiting for cluster config update ...
	I1109 14:15:48.473344  285057 start.go:256] writing updated cluster config ...
	I1109 14:15:48.473612  285057 ssh_runner.go:195] Run: rm -f paused
	I1109 14:15:48.477201  285057 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:48.480230  285057 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ng52f" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.484134  285057 pod_ready.go:94] pod "coredns-66bc5c9577-ng52f" is "Ready"
	I1109 14:15:48.484158  285057 pod_ready.go:86] duration metric: took 3.908127ms for pod "coredns-66bc5c9577-ng52f" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.486134  285057 pod_ready.go:83] waiting for pod "etcd-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.489524  285057 pod_ready.go:94] pod "etcd-calico-593530" is "Ready"
	I1109 14:15:48.489546  285057 pod_ready.go:86] duration metric: took 3.392491ms for pod "etcd-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.493995  285057 pod_ready.go:83] waiting for pod "kube-apiserver-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.497494  285057 pod_ready.go:94] pod "kube-apiserver-calico-593530" is "Ready"
	I1109 14:15:48.497514  285057 pod_ready.go:86] duration metric: took 3.498625ms for pod "kube-apiserver-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.499393  285057 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.880835  285057 pod_ready.go:94] pod "kube-controller-manager-calico-593530" is "Ready"
	I1109 14:15:48.880866  285057 pod_ready.go:86] duration metric: took 381.451946ms for pod "kube-controller-manager-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:49.081749  285057 pod_ready.go:83] waiting for pod "kube-proxy-bvdm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:49.481377  285057 pod_ready.go:94] pod "kube-proxy-bvdm9" is "Ready"
	I1109 14:15:49.481404  285057 pod_ready.go:86] duration metric: took 399.632087ms for pod "kube-proxy-bvdm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:49.681423  285057 pod_ready.go:83] waiting for pod "kube-scheduler-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:50.081224  285057 pod_ready.go:94] pod "kube-scheduler-calico-593530" is "Ready"
	I1109 14:15:50.081246  285057 pod_ready.go:86] duration metric: took 399.800182ms for pod "kube-scheduler-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:50.081256  285057 pod_ready.go:40] duration metric: took 1.604028627s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:50.123455  285057 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:15:50.125307  285057 out.go:179] * Done! kubectl is now configured to use "calico-593530" cluster and "default" namespace by default
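The retry loop above was polling for kube-dns while the Calico pods came up. A minimal sketch for watching the same kube-system pods by hand, assuming the context name matches the profile and the upstream Calico manifest labels (k8s-app=calico-node) are in use:

    # List the Calico and DNS pods the readiness loop was waiting on
    kubectl --context calico-593530 -n kube-system get pods -l k8s-app=calico-node
    kubectl --context calico-593530 -n kube-system get pods -l k8s-app=kube-dns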
	W1109 14:15:47.790932  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:50.290878  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:50.816767  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:50.816806  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:50.816812  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:50.816818  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:50.816823  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:50.816827  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:50.816830  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:50.816833  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:50.816848  292305 retry.go:31] will retry after 2.233599429s: missing components: kube-dns
	I1109 14:15:53.054548  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:53.054579  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:53.054586  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:53.054592  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:53.054596  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:53.054599  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:53.054603  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:53.054606  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:53.054618  292305 retry.go:31] will retry after 2.802341321s: missing components: kube-dns
	W1109 14:15:52.790292  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:54.790546  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:56.791262  280419 node_ready.go:49] node "kindnet-593530" is "Ready"
	I1109 14:15:56.791290  280419 node_ready.go:38] duration metric: took 41.003566488s for node "kindnet-593530" to be "Ready" ...
	I1109 14:15:56.791305  280419 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:15:56.791348  280419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:15:56.803143  280419 api_server.go:72] duration metric: took 41.49881417s to wait for apiserver process to appear ...
	I1109 14:15:56.803161  280419 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:15:56.803180  280419 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:15:56.807244  280419 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:15:56.808161  280419 api_server.go:141] control plane version: v1.34.1
	I1109 14:15:56.808186  280419 api_server.go:131] duration metric: took 5.018019ms to wait for apiserver health ...
	I1109 14:15:56.808196  280419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:15:56.810997  280419 system_pods.go:59] 8 kube-system pods found
	I1109 14:15:56.811028  280419 system_pods.go:61] "coredns-66bc5c9577-czn4q" [83857dcf-3d3e-4627-98d4-cb3890025b41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:56.811036  280419 system_pods.go:61] "etcd-kindnet-593530" [d7def6fa-d870-489f-8cde-c91579e96fbc] Running
	I1109 14:15:56.811042  280419 system_pods.go:61] "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
	I1109 14:15:56.811047  280419 system_pods.go:61] "kube-apiserver-kindnet-593530" [f351e2ed-6067-4770-a3bf-11f9b08be886] Running
	I1109 14:15:56.811052  280419 system_pods.go:61] "kube-controller-manager-kindnet-593530" [d9d04309-0e1b-43cf-a100-27dd3355a0d1] Running
	I1109 14:15:56.811057  280419 system_pods.go:61] "kube-proxy-2b82p" [ca422cca-ab6a-4ba6-ad64-d930c4d0bc23] Running
	I1109 14:15:56.811063  280419 system_pods.go:61] "kube-scheduler-kindnet-593530" [ac5aa5cc-418c-4036-84f1-a539578a9c6d] Running
	I1109 14:15:56.811070  280419 system_pods.go:61] "storage-provisioner" [4ea696ff-1c98-4a58-b242-3ee802da17bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:56.811081  280419 system_pods.go:74] duration metric: took 2.878198ms to wait for pod list to return data ...
	I1109 14:15:56.811095  280419 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:15:56.813215  280419 default_sa.go:45] found service account: "default"
	I1109 14:15:56.813235  280419 default_sa.go:55] duration metric: took 2.133116ms for default service account to be created ...
	I1109 14:15:56.813243  280419 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:15:56.817222  280419 system_pods.go:86] 8 kube-system pods found
	I1109 14:15:56.817351  280419 system_pods.go:89] "coredns-66bc5c9577-czn4q" [83857dcf-3d3e-4627-98d4-cb3890025b41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:56.817362  280419 system_pods.go:89] "etcd-kindnet-593530" [d7def6fa-d870-489f-8cde-c91579e96fbc] Running
	I1109 14:15:56.817374  280419 system_pods.go:89] "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
	I1109 14:15:56.817387  280419 system_pods.go:89] "kube-apiserver-kindnet-593530" [f351e2ed-6067-4770-a3bf-11f9b08be886] Running
	I1109 14:15:56.817424  280419 system_pods.go:89] "kube-controller-manager-kindnet-593530" [d9d04309-0e1b-43cf-a100-27dd3355a0d1] Running
	I1109 14:15:56.817472  280419 system_pods.go:89] "kube-proxy-2b82p" [ca422cca-ab6a-4ba6-ad64-d930c4d0bc23] Running
	I1109 14:15:56.817839  280419 system_pods.go:89] "kube-scheduler-kindnet-593530" [ac5aa5cc-418c-4036-84f1-a539578a9c6d] Running
	I1109 14:15:56.817855  280419 system_pods.go:89] "storage-provisioner" [4ea696ff-1c98-4a58-b242-3ee802da17bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:56.817876  280419 retry.go:31] will retry after 207.927622ms: missing components: kube-dns
	I1109 14:15:57.029609  280419 system_pods.go:86] 8 kube-system pods found
	I1109 14:15:57.029671  280419 system_pods.go:89] "coredns-66bc5c9577-czn4q" [83857dcf-3d3e-4627-98d4-cb3890025b41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:57.029683  280419 system_pods.go:89] "etcd-kindnet-593530" [d7def6fa-d870-489f-8cde-c91579e96fbc] Running
	I1109 14:15:57.029691  280419 system_pods.go:89] "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
	I1109 14:15:57.029698  280419 system_pods.go:89] "kube-apiserver-kindnet-593530" [f351e2ed-6067-4770-a3bf-11f9b08be886] Running
	I1109 14:15:57.029704  280419 system_pods.go:89] "kube-controller-manager-kindnet-593530" [d9d04309-0e1b-43cf-a100-27dd3355a0d1] Running
	I1109 14:15:57.029711  280419 system_pods.go:89] "kube-proxy-2b82p" [ca422cca-ab6a-4ba6-ad64-d930c4d0bc23] Running
	I1109 14:15:57.029723  280419 system_pods.go:89] "kube-scheduler-kindnet-593530" [ac5aa5cc-418c-4036-84f1-a539578a9c6d] Running
	I1109 14:15:57.029730  280419 system_pods.go:89] "storage-provisioner" [4ea696ff-1c98-4a58-b242-3ee802da17bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:57.029751  280419 retry.go:31] will retry after 296.848925ms: missing components: kube-dns
	I1109 14:15:57.334256  280419 system_pods.go:86] 8 kube-system pods found
	I1109 14:15:57.334288  280419 system_pods.go:89] "coredns-66bc5c9577-czn4q" [83857dcf-3d3e-4627-98d4-cb3890025b41] Running
	I1109 14:15:57.334295  280419 system_pods.go:89] "etcd-kindnet-593530" [d7def6fa-d870-489f-8cde-c91579e96fbc] Running
	I1109 14:15:57.334301  280419 system_pods.go:89] "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
	I1109 14:15:57.334306  280419 system_pods.go:89] "kube-apiserver-kindnet-593530" [f351e2ed-6067-4770-a3bf-11f9b08be886] Running
	I1109 14:15:57.334311  280419 system_pods.go:89] "kube-controller-manager-kindnet-593530" [d9d04309-0e1b-43cf-a100-27dd3355a0d1] Running
	I1109 14:15:57.334317  280419 system_pods.go:89] "kube-proxy-2b82p" [ca422cca-ab6a-4ba6-ad64-d930c4d0bc23] Running
	I1109 14:15:57.334322  280419 system_pods.go:89] "kube-scheduler-kindnet-593530" [ac5aa5cc-418c-4036-84f1-a539578a9c6d] Running
	I1109 14:15:57.334330  280419 system_pods.go:89] "storage-provisioner" [4ea696ff-1c98-4a58-b242-3ee802da17bb] Running
	I1109 14:15:57.334340  280419 system_pods.go:126] duration metric: took 521.090338ms to wait for k8s-apps to be running ...
	I1109 14:15:57.334350  280419 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:15:57.334397  280419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:57.357449  280419 system_svc.go:56] duration metric: took 23.091164ms WaitForService to wait for kubelet
	I1109 14:15:57.357492  280419 kubeadm.go:587] duration metric: took 42.053154091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:15:57.357516  280419 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:15:57.361891  280419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:15:57.361917  280419 node_conditions.go:123] node cpu capacity is 8
	I1109 14:15:57.361929  280419 node_conditions.go:105] duration metric: took 4.407954ms to run NodePressure ...
	I1109 14:15:57.361943  280419 start.go:242] waiting for startup goroutines ...
	I1109 14:15:57.361953  280419 start.go:247] waiting for cluster config update ...
	I1109 14:15:57.361973  280419 start.go:256] writing updated cluster config ...
	I1109 14:15:57.362873  280419 ssh_runner.go:195] Run: rm -f paused
	I1109 14:15:57.368715  280419 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:57.374634  280419 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czn4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.381365  280419 pod_ready.go:94] pod "coredns-66bc5c9577-czn4q" is "Ready"
	I1109 14:15:57.381397  280419 pod_ready.go:86] duration metric: took 6.715957ms for pod "coredns-66bc5c9577-czn4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.384316  280419 pod_ready.go:83] waiting for pod "etcd-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.389676  280419 pod_ready.go:94] pod "etcd-kindnet-593530" is "Ready"
	I1109 14:15:57.389696  280419 pod_ready.go:86] duration metric: took 5.351325ms for pod "etcd-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.392377  280419 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.397107  280419 pod_ready.go:94] pod "kube-apiserver-kindnet-593530" is "Ready"
	I1109 14:15:57.397137  280419 pod_ready.go:86] duration metric: took 4.735759ms for pod "kube-apiserver-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.400542  280419 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.773686  280419 pod_ready.go:94] pod "kube-controller-manager-kindnet-593530" is "Ready"
	I1109 14:15:57.773711  280419 pod_ready.go:86] duration metric: took 373.147745ms for pod "kube-controller-manager-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.974561  280419 pod_ready.go:83] waiting for pod "kube-proxy-2b82p" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:58.374029  280419 pod_ready.go:94] pod "kube-proxy-2b82p" is "Ready"
	I1109 14:15:58.374055  280419 pod_ready.go:86] duration metric: took 399.468896ms for pod "kube-proxy-2b82p" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:58.575696  280419 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:58.973221  280419 pod_ready.go:94] pod "kube-scheduler-kindnet-593530" is "Ready"
	I1109 14:15:58.973244  280419 pod_ready.go:86] duration metric: took 397.514286ms for pod "kube-scheduler-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:58.973255  280419 pod_ready.go:40] duration metric: took 1.604507623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:59.014920  280419 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:15:59.016450  280419 out.go:179] * Done! kubectl is now configured to use "kindnet-593530" cluster and "default" namespace by default
	I1109 14:15:55.860931  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:55.860967  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:55.860974  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:55.860982  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:55.860988  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:55.860993  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:55.860999  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:55.861005  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:55.861026  292305 retry.go:31] will retry after 2.903100187s: missing components: kube-dns
	I1109 14:15:58.769758  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:58.769787  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:58.769794  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:58.769800  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:58.769805  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:58.769808  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:58.769813  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:58.769819  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:58.769832  292305 retry.go:31] will retry after 3.837368865s: missing components: kube-dns
	
	
	==> CRI-O <==
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.625773445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fecf764-b808-4a25-b719-242f2be036bc name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.626835032Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=39db12f4-0596-4c7b-ba79-7ef09cf8d014 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.62696559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.631761555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.631894367Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f3736bb7935b25570549f0c390a434cb8263e066f0534f046b20d61a0f1ee4f/merged/etc/passwd: no such file or directory"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.63192655Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f3736bb7935b25570549f0c390a434cb8263e066f0534f046b20d61a0f1ee4f/merged/etc/group: no such file or directory"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.632333896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.662655124Z" level=info msg="Created container 78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6: kube-system/storage-provisioner/storage-provisioner" id=39db12f4-0596-4c7b-ba79-7ef09cf8d014 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.663216012Z" level=info msg="Starting container: 78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6" id=c1e6a567-a7fe-4c74-bf24-d371186fb347 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.664954551Z" level=info msg="Started container" PID=1696 containerID=78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6 description=kube-system/storage-provisioner/storage-provisioner id=c1e6a567-a7fe-4c74-bf24-d371186fb347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c32f2df08eb386632b5c5c2b6c7c15e16709f4bf29049d4c9e2fbbbc6ad9051f
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.235570896Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.239710291Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.239734991Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.239750728Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.243250227Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.24327454Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.243292707Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.246829199Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.246853891Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.246872726Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.250251787Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.250274912Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.250296909Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.253540853Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.253559463Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	78756eafc8cc6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   c32f2df08eb38       storage-provisioner                                    kube-system
	f64494efa5bee       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   0c42e1d026796       dashboard-metrics-scraper-6ffb444bf9-jzz6r             kubernetes-dashboard
	86d787fc7b9fc       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   269fc0ffbc5ee       kubernetes-dashboard-855c9754f9-cfzqd                  kubernetes-dashboard
	db196ab0b527e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   2a03da47f8c2c       coredns-66bc5c9577-z8lkx                               kube-system
	2aee0c1de134e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   b799dc9d1968e       busybox                                                default
	fc06f175e4a8d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   bf3ef13feda79       kube-proxy-n95wb                                       kube-system
	ebf68a39b2ef3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   08e623a4370fc       kindnet-fdxsl                                          kube-system
	4b5a253a8c077       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   c32f2df08eb38       storage-provisioner                                    kube-system
	7ab8f2cac821a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   82fe3eb826f2b       kube-scheduler-default-k8s-diff-port-326524            kube-system
	fbe03639cf3cf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   25f8340ee5b6d       kube-apiserver-default-k8s-diff-port-326524            kube-system
	5c183e798015e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   7605e01263310       etcd-default-k8s-diff-port-326524                      kube-system
	837343655ca08       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   fd96abbe6f29b       kube-controller-manager-default-k8s-diff-port-326524   kube-system
	
	
	==> coredns [db196ab0b527eabaa5ca6448d00c0929a6ddeb5c052739081cb73ceb539b821d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55685 - 43274 "HINFO IN 7174974885144464276.1359323117684649961. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.088692886s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-326524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-326524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=default-k8s-diff-port-326524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_13_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:13:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-326524
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:15:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:15:52 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:15:52 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:15:52 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:15:52 +0000   Sun, 09 Nov 2025 14:14:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-326524
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                d901abab-4a5c-4bab-8d2e-5eebe721a5ed
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-z8lkx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m14s
	  kube-system                 etcd-default-k8s-diff-port-326524                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m20s
	  kube-system                 kindnet-fdxsl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m15s
	  kube-system                 kube-apiserver-default-k8s-diff-port-326524             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-326524    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-n95wb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-default-k8s-diff-port-326524             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jzz6r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cfzqd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  Starting                 48s                    kube-proxy       
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m25s (x8 over 2m25s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s (x8 over 2m25s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s (x8 over 2m25s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m20s                  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m20s                  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m20s                  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m16s                  node-controller  Node default-k8s-diff-port-326524 event: Registered Node default-k8s-diff-port-326524 in Controller
	  Normal  NodeReady                93s                    kubelet          Node default-k8s-diff-port-326524 status is now: NodeReady
	  Normal  Starting                 52s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)      kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)      kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)      kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node default-k8s-diff-port-326524 event: Registered Node default-k8s-diff-port-326524 in Controller
	
	
	==> dmesg <==
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 14:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 4f e9 4b 2a 15 08 06
	
	
	==> etcd [5c183e798015e8d24e58b5d9a3615af03e163ce2046d50f89cd8270f5eed3b9f] <==
	{"level":"warn","ts":"2025-11-09T14:15:10.642072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.649946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.657601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.664976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.672367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.679399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.686861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.693895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.706919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.714508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.722040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.783595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45682","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:15:14.039246Z","caller":"traceutil/trace.go:172","msg":"trace[606128908] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:609; }","duration":"112.114153ms","start":"2025-11-09T14:15:13.927109Z","end":"2025-11-09T14:15:14.039224Z","steps":["trace[606128908] 'read index received'  (duration: 112.107953ms)","trace[606128908] 'applied index is now lower than readState.Index'  (duration: 5.403µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:15:14.179864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.730761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller\" limit:1 ","response":"range_response_count:1 size:252"}
	{"level":"info","ts":"2025-11-09T14:15:14.179959Z","caller":"traceutil/trace.go:172","msg":"trace[834412207] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller; range_end:; response_count:1; response_revision:574; }","duration":"252.838202ms","start":"2025-11-09T14:15:13.927101Z","end":"2025-11-09T14:15:14.179939Z","steps":["trace[834412207] 'agreement among raft nodes before linearized reading'  (duration: 112.211975ms)","trace[834412207] 'range keys from in-memory index tree'  (duration: 140.427715ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:15:14.180410Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.619292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596946199590411 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-z8lkx.18765c3b487f24ee\" mod_revision:574 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-z8lkx.18765c3b487f24ee\" value_size:714 lease:499224909344814214 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-66bc5c9577-z8lkx.18765c3b487f24ee\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:15:14.180519Z","caller":"traceutil/trace.go:172","msg":"trace[2023832512] linearizableReadLoop","detail":"{readStateIndex:610; appliedIndex:609; }","duration":"131.351523ms","start":"2025-11-09T14:15:14.049153Z","end":"2025-11-09T14:15:14.180505Z","steps":["trace[2023832512] 'read index received'  (duration: 30.251µs)","trace[2023832512] 'applied index is now lower than readState.Index'  (duration: 131.319864ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:15:14.180671Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.506651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-z8lkx\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-09T14:15:14.180703Z","caller":"traceutil/trace.go:172","msg":"trace[1938066004] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-z8lkx; range_end:; response_count:1; response_revision:575; }","duration":"131.545699ms","start":"2025-11-09T14:15:14.049148Z","end":"2025-11-09T14:15:14.180694Z","steps":["trace[1938066004] 'agreement among raft nodes before linearized reading'  (duration: 131.402245ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:15:14.180876Z","caller":"traceutil/trace.go:172","msg":"trace[1216754960] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"265.206218ms","start":"2025-11-09T14:15:13.915657Z","end":"2025-11-09T14:15:14.180864Z","steps":["trace[1216754960] 'process raft request'  (duration: 123.591228ms)","trace[1216754960] 'compare'  (duration: 140.539821ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:15:15.152764Z","caller":"traceutil/trace.go:172","msg":"trace[1821183373] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:612; }","duration":"104.65581ms","start":"2025-11-09T14:15:15.048085Z","end":"2025-11-09T14:15:15.152741Z","steps":["trace[1821183373] 'read index received'  (duration: 104.635619ms)","trace[1821183373] 'applied index is now lower than readState.Index'  (duration: 19.405µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:15:15.152935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.829539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-z8lkx\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-09T14:15:15.152935Z","caller":"traceutil/trace.go:172","msg":"trace[1904282794] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"120.456086ms","start":"2025-11-09T14:15:15.032457Z","end":"2025-11-09T14:15:15.152913Z","steps":["trace[1904282794] 'process raft request'  (duration: 120.353297ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:15:15.153333Z","caller":"traceutil/trace.go:172","msg":"trace[71154788] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-z8lkx; range_end:; response_count:1; response_revision:577; }","duration":"104.875305ms","start":"2025-11-09T14:15:15.048077Z","end":"2025-11-09T14:15:15.152953Z","steps":["trace[71154788] 'agreement among raft nodes before linearized reading'  (duration: 104.751804ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:15:15.162866Z","caller":"traceutil/trace.go:172","msg":"trace[900107361] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"130.318311ms","start":"2025-11-09T14:15:15.032527Z","end":"2025-11-09T14:15:15.162845Z","steps":["trace[900107361] 'process raft request'  (duration: 125.669038ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:16:00 up 58 min,  0 user,  load average: 6.07, 4.49, 2.63
	Linux default-k8s-diff-port-326524 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ebf68a39b2ef31de8b38938ff0fda338ca0858e9fd7cc54035465ac606412dc9] <==
	I1109 14:15:12.028890       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:15:12.029142       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:15:12.029298       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:15:12.029316       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:15:12.029338       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:15:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:15:12.229243       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:15:12.229271       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:15:12.229283       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:15:12.229548       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:15:42.229965       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:15:42.229971       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:15:42.229965       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:15:42.322582       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1109 14:15:43.529425       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:15:43.529450       1 metrics.go:72] Registering metrics
	I1109 14:15:43.529511       1 controller.go:711] "Syncing nftables rules"
	I1109 14:15:52.235269       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:15:52.235318       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fbe03639cf3cf12216d6d619a7ac216d6482e7d7722f4c3ff0c7021ea98e7f30] <==
	I1109 14:15:11.355833       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:15:11.349554       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:15:11.352743       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:15:11.368090       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:15:11.374113       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:15:11.374202       1 policy_source.go:240] refreshing policies
	I1109 14:15:11.376887       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1109 14:15:11.377058       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:15:11.377070       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:15:11.377078       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:15:11.377083       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:15:11.382622       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:15:11.388465       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:15:11.390554       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:15:11.476011       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:15:11.733195       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:15:11.762514       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:15:11.789729       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:15:11.803886       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:15:11.883088       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.49.13"}
	I1109 14:15:11.899826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.151.61"}
	I1109 14:15:12.256104       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:15:15.031969       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:15:15.165820       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:15:15.238072       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [837343655ca08ffad86a95fba8e051cfacdce4058b4c6aebfef423a9a95ad170] <==
	I1109 14:15:14.646038       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:15:14.678893       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:15:14.678930       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:15:14.678942       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:15:14.678974       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:15:14.679193       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:15:14.679319       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 14:15:14.679331       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:15:14.679366       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:15:14.679430       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:15:14.679514       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-326524"
	I1109 14:15:14.679550       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 14:15:14.679667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:15:14.681998       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:15:14.685218       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:15:14.685279       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:15:14.687672       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:15:14.695924       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:15:14.695945       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:15:14.705241       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:15:14.707548       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:15:14.708615       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:15:14.711932       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:15:14.713054       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:15:14.715283       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [fc06f175e4a8df21959410c9b874ceb5942160e55f3c77acdd8326cb0be2a478] <==
	I1109 14:15:11.932621       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:15:12.001861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:15:12.102412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:15:12.102440       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:15:12.102520       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:15:12.122012       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:15:12.122074       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:15:12.127784       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:15:12.128148       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:15:12.128183       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:15:12.134115       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:15:12.134138       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:15:12.134170       1 config.go:200] "Starting service config controller"
	I1109 14:15:12.134175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:15:12.134192       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:15:12.134202       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:15:12.134328       1 config.go:309] "Starting node config controller"
	I1109 14:15:12.134366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:15:12.134375       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:15:12.234318       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:15:12.234332       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:15:12.234313       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7ab8f2cac821afdbf2b15cac151b0b2e02e8e8d57071e97a867dc8ec28d4c7f2] <==
	I1109 14:15:09.886025       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:15:11.343778       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:15:11.343808       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:15:11.352040       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:15:11.352189       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:15:11.352725       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:15:11.352157       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1109 14:15:11.352838       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1109 14:15:11.352205       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:15:11.354573       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:15:11.352224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:15:11.452992       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1109 14:15:11.453136       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:15:11.455248       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:15:15 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:15.399107     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/abdce049-274b-4d8e-b0bb-1db69a7fd265-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cfzqd\" (UID: \"abdce049-274b-4d8e-b0bb-1db69a7fd265\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfzqd"
	Nov 09 14:15:15 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:15.399346     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vgw\" (UniqueName: \"kubernetes.io/projected/20ebc0a6-2eb1-4988-b1ab-367cac579079-kube-api-access-n2vgw\") pod \"dashboard-metrics-scraper-6ffb444bf9-jzz6r\" (UID: \"20ebc0a6-2eb1-4988-b1ab-367cac579079\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r"
	Nov 09 14:15:15 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:15.399619     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/20ebc0a6-2eb1-4988-b1ab-367cac579079-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-jzz6r\" (UID: \"20ebc0a6-2eb1-4988-b1ab-367cac579079\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r"
	Nov 09 14:15:18 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:18.553832     719 scope.go:117] "RemoveContainer" containerID="454cf4ccce17381fca3f7fb640a151ba3cc8a6ca75233f3ad2c9f60b447a34e9"
	Nov 09 14:15:19 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:19.559155     719 scope.go:117] "RemoveContainer" containerID="454cf4ccce17381fca3f7fb640a151ba3cc8a6ca75233f3ad2c9f60b447a34e9"
	Nov 09 14:15:19 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:19.559442     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:19 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:19.559590     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:20 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:20.563411     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:20 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:20.563626     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:22 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:22.096476     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:22 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:22.096690     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:23 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:23.581739     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfzqd" podStartSLOduration=1.332306976 podStartE2EDuration="8.581716639s" podCreationTimestamp="2025-11-09 14:15:15 +0000 UTC" firstStartedPulling="2025-11-09 14:15:15.689194853 +0000 UTC m=+7.307046992" lastFinishedPulling="2025-11-09 14:15:22.938604505 +0000 UTC m=+14.556456655" observedRunningTime="2025-11-09 14:15:23.581593208 +0000 UTC m=+15.199445400" watchObservedRunningTime="2025-11-09 14:15:23.581716639 +0000 UTC m=+15.199568796"
	Nov 09 14:15:37 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:37.484504     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:37 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:37.608228     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:37 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:37.608453     719 scope.go:117] "RemoveContainer" containerID="f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	Nov 09 14:15:37 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:37.608658     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:42 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:42.097462     719 scope.go:117] "RemoveContainer" containerID="f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	Nov 09 14:15:42 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:42.097700     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:42 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:42.624444     719 scope.go:117] "RemoveContainer" containerID="4b5a253a8c0774e804e83d03fcc6cdd58c8b3baf291ecfb31bc3eb73c12fa77d"
	Nov 09 14:15:55 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:55.484830     719 scope.go:117] "RemoveContainer" containerID="f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	Nov 09 14:15:55 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:55.485074     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:57 default-k8s-diff-port-326524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:15:57 default-k8s-diff-port-326524 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:15:57 default-k8s-diff-port-326524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:15:57 default-k8s-diff-port-326524 systemd[1]: kubelet.service: Consumed 1.499s CPU time.
	
	
	==> kubernetes-dashboard [86d787fc7b9fc4076e72a30dca4ee7586b81d535a1d2635a796c6746370cdcd2] <==
	2025/11/09 14:15:22 Using namespace: kubernetes-dashboard
	2025/11/09 14:15:22 Using in-cluster config to connect to apiserver
	2025/11/09 14:15:22 Using secret token for csrf signing
	2025/11/09 14:15:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:15:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:15:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:15:22 Generating JWE encryption key
	2025/11/09 14:15:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:15:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:15:23 Initializing JWE encryption key from synchronized object
	2025/11/09 14:15:23 Creating in-cluster Sidecar client
	2025/11/09 14:15:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:15:23 Serving insecurely on HTTP port: 9090
	2025/11/09 14:15:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:15:22 Starting overwatch
	
	
	==> storage-provisioner [4b5a253a8c0774e804e83d03fcc6cdd58c8b3baf291ecfb31bc3eb73c12fa77d] <==
	I1109 14:15:11.898738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:15:41.902109       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6] <==
	I1109 14:15:42.676256       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:15:42.683226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:15:42.683261       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:15:42.684921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:15:46.139973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:15:50.400735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:15:53.999239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:15:57.053556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:16:00.075809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:16:00.082031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:16:00.082204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:16:00.082300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9456f7ff-bf23-4b3e-a78e-e1e46b0b9684", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-326524_daf419db-493f-4dac-a62a-bada0890b589 became leader
	I1109 14:16:00.082364       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-326524_daf419db-493f-4dac-a62a-bada0890b589!
	W1109 14:16:00.084682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:16:00.087854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:16:00.182581       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-326524_daf419db-493f-4dac-a62a-bada0890b589!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524: exit status 2 (321.231536ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-326524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-326524
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-326524:

-- stdout --
	[
	    {
	        "Id": "4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9",
	        "Created": "2025-11-09T14:13:22.347253658Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287963,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:15:02.008420696Z",
	            "FinishedAt": "2025-11-09T14:14:58.821452172Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/hosts",
	        "LogPath": "/var/lib/docker/containers/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9/4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9-json.log",
	        "Name": "/default-k8s-diff-port-326524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-326524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-326524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4d5e864b1f2e83847fe47fc8bf391baaaca9b2dfabc52799efae416bd4963be9",
	                "LowerDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784-init/diff:/var/lib/docker/overlay2/d0c6d4b4ffbfb6af64ec3408030fc54111fe30d500088a5ae947d598cfc72f0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6090cd65b9b7b71056ab21b51f3d0835e7a09039168090d26312aabaa0dba784/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-326524",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-326524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-326524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-326524",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-326524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "628f5f34a67d2308d8573aea56f7b31953d9374a115a545d55b4d3066ed1f45d",
	            "SandboxKey": "/var/run/docker/netns/628f5f34a67d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-326524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:6f:0c:a7:3c:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1418d8b0aecfeebbb964747ce9f2239c14745f39f121eb76b984b7589e5562c5",
	                    "EndpointID": "077158bde996f64749ced02646b419379755061a9a152ab723aa6cc72d97cf06",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-326524",
	                        "4d5e864b1f2e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524: exit status 2 (304.846247ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-326524 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-326524 logs -n 25: (1.155766358s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-593530 sudo systemctl status docker --all --full --no-pager                                                                                                      │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo systemctl cat docker --no-pager                                                                                                                      │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cat /etc/docker/daemon.json                                                                                                                          │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo docker system info                                                                                                                                   │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-326524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ start   │ -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo systemctl status cri-docker --all --full --no-pager                                                                                                  │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo systemctl cat cri-docker --no-pager                                                                                                                  │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                             │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                       │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cri-dockerd --version                                                                                                                                │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo systemctl status containerd --all --full --no-pager                                                                                                  │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p auto-593530 sudo systemctl cat containerd --no-pager                                                                                                                  │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cat /lib/systemd/system/containerd.service                                                                                                           │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo cat /etc/containerd/config.toml                                                                                                                      │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo containerd config dump                                                                                                                               │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo systemctl status crio --all --full --no-pager                                                                                                        │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo systemctl cat crio --no-pager                                                                                                                        │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                              │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ ssh     │ -p auto-593530 sudo crio config                                                                                                                                          │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ delete  │ -p auto-593530                                                                                                                                                           │ auto-593530                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ start   │ -p custom-flannel-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio       │ custom-flannel-593530        │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	│ ssh     │ -p calico-593530 pgrep -a kubelet                                                                                                                                        │ calico-593530                │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ image   │ default-k8s-diff-port-326524 image list --format=json                                                                                                                    │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │ 09 Nov 25 14:15 UTC │
	│ pause   │ -p default-k8s-diff-port-326524 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-326524 │ jenkins │ v1.37.0 │ 09 Nov 25 14:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:15:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:15:09.248163  292305 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:15:09.248321  292305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:15:09.248332  292305 out.go:374] Setting ErrFile to fd 2...
	I1109 14:15:09.248338  292305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:15:09.248568  292305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:15:09.249093  292305 out.go:368] Setting JSON to false
	I1109 14:15:09.250527  292305 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3459,"bootTime":1762694250,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:15:09.250615  292305 start.go:143] virtualization: kvm guest
	I1109 14:15:09.252574  292305 out.go:179] * [custom-flannel-593530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:15:09.254010  292305 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:15:09.254018  292305 notify.go:221] Checking for updates...
	I1109 14:15:09.256418  292305 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:15:09.258018  292305 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:09.259118  292305 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:15:09.260250  292305 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:15:09.261303  292305 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:15:07.998325  287405 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-326524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:15:08.018527  287405 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:15:08.023949  287405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:15:08.037050  287405 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:15:08.037213  287405 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:15:08.037278  287405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:15:08.076018  287405 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:15:08.076038  287405 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:15:08.076087  287405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:15:08.108788  287405 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:15:08.108812  287405 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:15:08.108821  287405 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:15:08.108942  287405 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-326524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:15:08.109019  287405 ssh_runner.go:195] Run: crio config
	I1109 14:15:08.178530  287405 cni.go:84] Creating CNI manager for ""
	I1109 14:15:08.178555  287405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:15:08.178572  287405 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:15:08.178597  287405 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-326524 NodeName:default-k8s-diff-port-326524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:15:08.178780  287405 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-326524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:15:08.178859  287405 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:15:08.188730  287405 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:15:08.188785  287405 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:15:08.196529  287405 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:15:08.212596  287405 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:15:08.228685  287405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1109 14:15:08.244850  287405 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:15:08.249630  287405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:15:08.262257  287405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:08.355912  287405 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:08.379875  287405 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524 for IP: 192.168.85.2
	I1109 14:15:08.379900  287405 certs.go:195] generating shared ca certs ...
	I1109 14:15:08.379921  287405 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:08.380082  287405 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:15:08.380135  287405 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:15:08.380146  287405 certs.go:257] generating profile certs ...
	I1109 14:15:08.380246  287405 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/client.key
	I1109 14:15:08.380319  287405 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key.cfdee782
	I1109 14:15:08.380365  287405 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key
	I1109 14:15:08.380496  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:15:08.380534  287405 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:15:08.380548  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:15:08.380579  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:15:08.380615  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:15:08.380663  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:15:08.380718  287405 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:15:08.381502  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:15:08.402440  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:15:08.436548  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:15:08.463069  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:15:08.501671  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:15:08.519946  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:15:08.537881  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:15:08.553757  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/default-k8s-diff-port-326524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:15:08.570148  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:15:08.588077  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:15:08.606687  287405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:15:08.625632  287405 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:15:08.640330  287405 ssh_runner.go:195] Run: openssl version
	I1109 14:15:08.652070  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:15:08.664725  287405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:15:08.669445  287405 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:15:08.669499  287405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:15:08.719098  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:15:08.727517  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:15:08.736745  287405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:15:08.740387  287405 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:15:08.740441  287405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:15:08.780445  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:15:08.788558  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:15:08.797526  287405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:08.801330  287405 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:08.801399  287405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:08.847249  287405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:15:08.856216  287405 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:15:08.860420  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:15:08.924429  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:15:08.984219  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:15:09.062290  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:15:09.125578  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:15:09.186578  287405 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:15:09.249713  287405 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-326524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-326524 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:15:09.249823  287405 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:15:09.249875  287405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:15:09.291281  287405 cri.go:89] found id: "7ab8f2cac821afdbf2b15cac151b0b2e02e8e8d57071e97a867dc8ec28d4c7f2"
	I1109 14:15:09.291304  287405 cri.go:89] found id: "fbe03639cf3cf12216d6d619a7ac216d6482e7d7722f4c3ff0c7021ea98e7f30"
	I1109 14:15:09.291311  287405 cri.go:89] found id: "5c183e798015e8d24e58b5d9a3615af03e163ce2046d50f89cd8270f5eed3b9f"
	I1109 14:15:09.291322  287405 cri.go:89] found id: "837343655ca08ffad86a95fba8e051cfacdce4058b4c6aebfef423a9a95ad170"
	I1109 14:15:09.291327  287405 cri.go:89] found id: ""
	I1109 14:15:09.291369  287405 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:15:09.306193  287405 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:15:09Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:15:09.306276  287405 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:15:09.316622  287405 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:15:09.316755  287405 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:15:09.316805  287405 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:15:09.326991  287405 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:15:09.327457  287405 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-326524" does not appear in /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:09.327561  287405 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-5854/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-326524" cluster setting kubeconfig missing "default-k8s-diff-port-326524" context setting]
	I1109 14:15:09.328023  287405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:09.331761  287405 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:15:09.342472  287405 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1109 14:15:09.342495  287405 kubeadm.go:602] duration metric: took 25.723126ms to restartPrimaryControlPlane
	I1109 14:15:09.342505  287405 kubeadm.go:403] duration metric: took 92.801476ms to StartCluster
	I1109 14:15:09.342521  287405 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:09.342570  287405 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:09.343328  287405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:09.343916  287405 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:09.344014  287405 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:09.344157  287405 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:15:09.344254  287405 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-326524"
	I1109 14:15:09.344274  287405 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-326524"
	W1109 14:15:09.344282  287405 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:15:09.344307  287405 host.go:66] Checking if "default-k8s-diff-port-326524" exists ...
	I1109 14:15:09.344566  287405 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-326524"
	I1109 14:15:09.344603  287405 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-326524"
	I1109 14:15:09.344705  287405 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-326524"
	I1109 14:15:09.344849  287405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:09.345147  287405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:09.344727  287405 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-326524"
	W1109 14:15:09.345753  287405 addons.go:248] addon dashboard should already be in state true
	I1109 14:15:09.345784  287405 out.go:179] * Verifying Kubernetes components...
	I1109 14:15:09.345789  287405 host.go:66] Checking if "default-k8s-diff-port-326524" exists ...
	I1109 14:15:09.346259  287405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:09.347004  287405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:09.379753  287405 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-326524"
	W1109 14:15:09.379777  287405 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:15:09.379804  287405 host.go:66] Checking if "default-k8s-diff-port-326524" exists ...
	I1109 14:15:09.380240  287405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-326524 --format={{.State.Status}}
	I1109 14:15:09.383272  287405 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:15:09.384712  287405 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:15:09.385962  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:15:09.385982  287405 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:15:09.386037  287405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:15:09.387682  287405 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:15:09.263052  292305 config.go:182] Loaded profile config "calico-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:09.263200  292305 config.go:182] Loaded profile config "default-k8s-diff-port-326524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:09.263315  292305 config.go:182] Loaded profile config "kindnet-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:09.263425  292305 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:15:09.297076  292305 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:15:09.297210  292305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:15:09.414574  292305 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-09 14:15:09.383822674 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:15:09.415141  292305 docker.go:319] overlay module found
	I1109 14:15:09.419152  292305 out.go:179] * Using the docker driver based on user configuration
	I1109 14:15:09.421492  292305 start.go:309] selected driver: docker
	I1109 14:15:09.421505  292305 start.go:930] validating driver "docker" against <nil>
	I1109 14:15:09.421519  292305 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:15:09.422328  292305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:15:09.527729  292305 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-09 14:15:09.513356284 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:15:09.527946  292305 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:15:09.528222  292305 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:15:09.529728  292305 out.go:179] * Using Docker driver with root privileges
	I1109 14:15:09.530718  292305 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1109 14:15:09.530764  292305 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1109 14:15:09.530851  292305 start.go:353] cluster config:
	{Name:custom-flannel-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:15:09.532128  292305 out.go:179] * Starting "custom-flannel-593530" primary control-plane node in "custom-flannel-593530" cluster
	I1109 14:15:09.533083  292305 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:15:09.534234  292305 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:15:09.535313  292305 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:15:09.535341  292305 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:15:09.535349  292305 cache.go:65] Caching tarball of preloaded images
	I1109 14:15:09.535436  292305 preload.go:238] Found /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:15:09.535366  292305 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:15:09.535449  292305 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:15:09.535556  292305 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/config.json ...
	I1109 14:15:09.535588  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/config.json: {Name:mkbad36af8dabb255f57147eb5cb60362f4e098d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:09.562797  292305 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:15:09.562819  292305 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:15:09.562837  292305 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:15:09.562867  292305 start.go:360] acquireMachinesLock for custom-flannel-593530: {Name:mk5f212c6ccd0d4ce7db5d28c9e6cf64be85fa38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:15:09.562976  292305 start.go:364] duration metric: took 91.057µs to acquireMachinesLock for "custom-flannel-593530"
	I1109 14:15:09.563001  292305 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:09.563084  292305 start.go:125] createHost starting for "" (driver="docker")
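
(Editor-added illustration, not part of the test log.) The block above generates a cluster config and saves it to the profile's config.json before provisioning the machine. A small sketch of reading a few of those fields back, assuming the path from this run and only a subset of the struct fields visible in the dumped config:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// clusterConfig models only the handful of fields referenced here; the real
// profile config contains many more, as the log dump above shows.
type clusterConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
		CNI               string
	}
}

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/config.json")
	if err != nil {
		panic(err)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s cni=%s\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.CNI)
}
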
	I1109 14:15:09.613006  280419 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:15:09.613082  280419 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:15:09.613213  280419 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:15:09.613292  280419 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:15:09.613344  280419 kubeadm.go:319] OS: Linux
	I1109 14:15:09.613411  280419 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:15:09.613482  280419 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:15:09.613554  280419 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:15:09.613624  280419 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:15:09.614617  280419 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:15:09.614745  280419 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:15:09.614819  280419 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:15:09.614904  280419 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:15:09.615012  280419 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:15:09.615151  280419 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:15:09.615271  280419 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:15:09.615367  280419 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:15:09.617112  280419 out.go:252]   - Generating certificates and keys ...
	I1109 14:15:09.617211  280419 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:15:09.617322  280419 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:15:09.617413  280419 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:15:09.617488  280419 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:15:09.617572  280419 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:15:09.617661  280419 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:15:09.617733  280419 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:15:09.617875  280419 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-593530 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:15:09.617947  280419 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:15:09.618094  280419 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-593530 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:15:09.618177  280419 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:15:09.618254  280419 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:15:09.618315  280419 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:15:09.618400  280419 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:15:09.618460  280419 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:15:09.618531  280419 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:15:09.618598  280419 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:15:09.618706  280419 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:15:09.618780  280419 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:15:09.618875  280419 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:15:09.618955  280419 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:15:09.620172  280419 out.go:252]   - Booting up control plane ...
	I1109 14:15:09.620277  280419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:15:09.620376  280419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:15:09.620455  280419 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:15:09.620586  280419 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:15:09.620726  280419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:15:09.620858  280419 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:15:09.620963  280419 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:15:09.621009  280419 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:15:09.621179  280419 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:15:09.621302  280419 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:15:09.621370  280419 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003113792s
	I1109 14:15:09.621480  280419 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:15:09.621577  280419 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1109 14:15:09.621736  280419 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:15:09.621836  280419 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:15:09.621931  280419 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.067158098s
	I1109 14:15:09.622019  280419 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.468471515s
	I1109 14:15:09.622101  280419 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001957862s
	I1109 14:15:09.622234  280419 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:15:09.622380  280419 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:15:09.622449  280419 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:15:09.622795  280419 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-593530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:15:09.622946  280419 kubeadm.go:319] [bootstrap-token] Using token: dy2agk.tr0oebul2kwwo3mm
	I1109 14:15:09.626667  280419 out.go:252]   - Configuring RBAC rules ...
	I1109 14:15:09.626905  280419 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:15:09.627128  280419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:15:09.627309  280419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:15:09.627470  280419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:15:09.627620  280419 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:15:09.627745  280419 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:15:09.627889  280419 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:15:09.627945  280419 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:15:09.628008  280419 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:15:09.628014  280419 kubeadm.go:319] 
	I1109 14:15:09.628090  280419 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:15:09.628095  280419 kubeadm.go:319] 
	I1109 14:15:09.628190  280419 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:15:09.628196  280419 kubeadm.go:319] 
	I1109 14:15:09.628226  280419 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:15:09.628297  280419 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:15:09.628361  280419 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:15:09.628366  280419 kubeadm.go:319] 
	I1109 14:15:09.628432  280419 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:15:09.628438  280419 kubeadm.go:319] 
	I1109 14:15:09.628494  280419 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:15:09.628499  280419 kubeadm.go:319] 
	I1109 14:15:09.628563  280419 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:15:09.628688  280419 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:15:09.628768  280419 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:15:09.628774  280419 kubeadm.go:319] 
	I1109 14:15:09.628876  280419 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:15:09.628974  280419 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:15:09.628980  280419 kubeadm.go:319] 
	I1109 14:15:09.629088  280419 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dy2agk.tr0oebul2kwwo3mm \
	I1109 14:15:09.629212  280419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:15:09.629238  280419 kubeadm.go:319] 	--control-plane 
	I1109 14:15:09.629244  280419 kubeadm.go:319] 
	I1109 14:15:09.629356  280419 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:15:09.629361  280419 kubeadm.go:319] 
	I1109 14:15:09.629467  280419 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dy2agk.tr0oebul2kwwo3mm \
	I1109 14:15:09.629610  280419 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 14:15:09.629623  280419 cni.go:84] Creating CNI manager for "kindnet"
	I1109 14:15:09.631799  280419 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:15:06.718913  285057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.crt ...
	I1109 14:15:06.718949  285057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.crt: {Name:mk8405f7379ebaff761135288cc47f11de920497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:06.719144  285057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.key ...
	I1109 14:15:06.719172  285057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.key: {Name:mk54c17d3f53ae6d6e4d043d25e865cefdb18ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:06.719450  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:15:06.719505  285057 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:15:06.719521  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:15:06.719552  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:15:06.719654  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:15:06.719701  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:15:06.719756  285057 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:15:06.720506  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:15:06.747314  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:15:06.772596  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:15:06.794513  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:15:06.814162  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 14:15:06.834065  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:15:06.851457  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:15:06.869828  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/calico-593530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:15:06.889071  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:15:06.910425  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:15:06.930131  285057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:15:06.948590  285057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:15:06.960731  285057 ssh_runner.go:195] Run: openssl version
	I1109 14:15:06.966872  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:15:06.974823  285057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:15:06.978195  285057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:15:06.978237  285057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:15:07.020774  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:15:07.037155  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:15:07.048994  285057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:15:07.053326  285057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:15:07.053381  285057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:15:07.092088  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:15:07.100971  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:15:07.109171  285057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:07.113081  285057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:07.113125  285057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:07.148602  285057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:15:07.156615  285057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:15:07.160159  285057 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:15:07.160212  285057 kubeadm.go:401] StartCluster: {Name:calico-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:15:07.160295  285057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:15:07.160337  285057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:15:07.186892  285057 cri.go:89] found id: ""
	I1109 14:15:07.186946  285057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:15:07.194455  285057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:15:07.202432  285057 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:15:07.202479  285057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:15:07.213867  285057 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:15:07.213887  285057 kubeadm.go:158] found existing configuration files:
	
	I1109 14:15:07.213940  285057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:15:07.222241  285057 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:15:07.222288  285057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:15:07.229031  285057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:15:07.236011  285057 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:15:07.236050  285057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:15:07.242534  285057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:15:07.249612  285057 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:15:07.249674  285057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:15:07.256235  285057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:15:07.263172  285057 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:15:07.263206  285057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:15:07.270159  285057 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:15:07.331249  285057 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:15:07.413688  285057 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
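
(Editor-added illustration, not part of the test log.) Earlier in this block the openssl/ln pairs install each extra CA certificate by symlinking the PEM to its OpenSSL subject hash under /etc/ssl/certs, the same effect update-ca-certificates would have. A stand-alone sketch of that step, assuming openssl on PATH and the minikubeCA.pem path from this run (root is needed to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Subject hash, e.g. b5213941 as seen in the log's symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ignore error so the symlink is recreated idempotently
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}
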
	I1109 14:15:09.388811  287405 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:09.388836  287405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:15:09.388883  287405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:15:09.418557  287405 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:09.418582  287405 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:15:09.418634  287405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-326524
	I1109 14:15:09.431172  287405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:15:09.435076  287405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:15:09.457734  287405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/default-k8s-diff-port-326524/id_rsa Username:docker}
	I1109 14:15:09.537561  287405 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:09.556995  287405 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-326524" to be "Ready" ...
	I1109 14:15:09.589031  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:15:09.589061  287405 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:15:09.592931  287405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:09.602283  287405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:09.616485  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:15:09.616505  287405 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:15:09.642211  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:15:09.642246  287405 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:15:09.668015  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:15:09.668035  287405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:15:09.709885  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:15:09.709963  287405 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:15:09.727497  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:15:09.727521  287405 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:15:09.750226  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:15:09.750248  287405 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:15:09.768986  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:15:09.769008  287405 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:15:09.787904  287405 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:15:09.787927  287405 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:15:09.810073  287405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:15:11.300008  287405 node_ready.go:49] node "default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:11.300040  287405 node_ready.go:38] duration metric: took 1.742965251s for node "default-k8s-diff-port-326524" to be "Ready" ...
	I1109 14:15:11.300056  287405 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:15:11.300105  287405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:15:12.002766  287405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.40979526s)
	I1109 14:15:12.002831  287405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.400519777s)
	I1109 14:15:12.002983  287405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.192863732s)
	I1109 14:15:12.003102  287405 api_server.go:72] duration metric: took 2.65905681s to wait for apiserver process to appear ...
	I1109 14:15:12.003119  287405 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:15:12.003140  287405 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:15:12.005628  287405 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-326524 addons enable metrics-server
	
	I1109 14:15:12.009263  287405 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:15:12.009299  287405 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:15:12.010799  287405 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
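
(Editor-added illustration, not part of the test log.) The 500 above is transient: the apiserver answers /healthz while the rbac/bootstrap-roles and system-priority-classes post-start hooks are still running, so api_server.go keeps polling until it gets a 200. A minimal sketch of that wait loop, using the URL from this run and skipping TLS verification because the apiserver's serving cert is not in the host trust store:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Printf("attempt %d: %s\n", i+1, resp.Status)
			if resp.StatusCode == http.StatusOK {
				return // healthz check passed
			}
		} else {
			fmt.Printf("attempt %d: %v\n", i+1, err)
		}
		time.Sleep(2 * time.Second)
	}
}
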
	I1109 14:15:09.633271  280419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:15:09.638605  280419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:15:09.640169  280419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:15:09.663440  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:15:10.050955  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:10.051005  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-593530 minikube.k8s.io/updated_at=2025_11_09T14_15_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=kindnet-593530 minikube.k8s.io/primary=true
	I1109 14:15:10.051034  280419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:15:10.177520  280419 ops.go:34] apiserver oom_adj: -16
	I1109 14:15:10.177618  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:10.678358  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:11.177786  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:11.677754  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:12.178549  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
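
(Editor-added illustration, not part of the test log.) The repeated "kubectl get sa default" calls above appear to be minikube waiting for the default ServiceAccount to exist before it finishes elevating kube-system privileges; the log later reports the wait took about 5.25s. A stand-alone version of that retry loop, assuming kubectl and a working kubeconfig on the local PATH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 20; i++ {
		cmd := exec.Command("kubectl", "get", "sa", "default", "-n", "default")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Print(string(out)) // ServiceAccount exists; done waiting
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("default ServiceAccount never appeared")
}
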
	I1109 14:15:09.564720  292305 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:15:09.564951  292305 start.go:159] libmachine.API.Create for "custom-flannel-593530" (driver="docker")
	I1109 14:15:09.564986  292305 client.go:173] LocalClient.Create starting
	I1109 14:15:09.565056  292305 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem
	I1109 14:15:09.565105  292305 main.go:143] libmachine: Decoding PEM data...
	I1109 14:15:09.565126  292305 main.go:143] libmachine: Parsing certificate...
	I1109 14:15:09.565189  292305 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem
	I1109 14:15:09.565216  292305 main.go:143] libmachine: Decoding PEM data...
	I1109 14:15:09.565233  292305 main.go:143] libmachine: Parsing certificate...
	I1109 14:15:09.565620  292305 cli_runner.go:164] Run: docker network inspect custom-flannel-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:15:09.591661  292305 cli_runner.go:211] docker network inspect custom-flannel-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:15:09.591745  292305 network_create.go:284] running [docker network inspect custom-flannel-593530] to gather additional debugging logs...
	I1109 14:15:09.591764  292305 cli_runner.go:164] Run: docker network inspect custom-flannel-593530
	W1109 14:15:09.614454  292305 cli_runner.go:211] docker network inspect custom-flannel-593530 returned with exit code 1
	I1109 14:15:09.614482  292305 network_create.go:287] error running [docker network inspect custom-flannel-593530]: docker network inspect custom-flannel-593530: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-593530 not found
	I1109 14:15:09.614498  292305 network_create.go:289] output of [docker network inspect custom-flannel-593530]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-593530 not found
	
	** /stderr **
	I1109 14:15:09.614627  292305 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:15:09.640818  292305 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
	I1109 14:15:09.641739  292305 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-227a1511ff5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:00:84:88:a9:17} reservation:<nil>}
	I1109 14:15:09.642760  292305 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9196665a99b4 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:6c:f7:d0:28:4f} reservation:<nil>}
	I1109 14:15:09.643403  292305 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e84b4000fff1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:9e:5b:47:b5:f4} reservation:<nil>}
	I1109 14:15:09.644167  292305 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1418d8b0aecf IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:96:45:f5:f6:93:a3} reservation:<nil>}
	I1109 14:15:09.645056  292305 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2d9896d17cc8 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:0e:b1:bc:bf:18:60} reservation:<nil>}
	I1109 14:15:09.646141  292305 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f05260}
	I1109 14:15:09.646224  292305 network_create.go:124] attempt to create docker network custom-flannel-593530 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1109 14:15:09.646294  292305 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-593530 custom-flannel-593530
	I1109 14:15:09.732382  292305 network_create.go:108] docker network custom-flannel-593530 192.168.103.0/24 created
	I1109 14:15:09.732444  292305 kic.go:121] calculated static IP "192.168.103.2" for the "custom-flannel-593530" container
	I1109 14:15:09.732511  292305 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:15:09.757786  292305 cli_runner.go:164] Run: docker volume create custom-flannel-593530 --label name.minikube.sigs.k8s.io=custom-flannel-593530 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:15:09.789747  292305 oci.go:103] Successfully created a docker volume custom-flannel-593530
	I1109 14:15:09.789853  292305 cli_runner.go:164] Run: docker run --rm --name custom-flannel-593530-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-593530 --entrypoint /usr/bin/test -v custom-flannel-593530:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:15:10.324279  292305 oci.go:107] Successfully prepared a docker volume custom-flannel-593530
	I1109 14:15:10.324372  292305 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:15:10.324389  292305 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:15:10.324490  292305 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-593530:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
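
(Editor-added illustration, not part of the test log.) The preload step above unpacks the cached image tarball into the profile's Docker volume by running tar inside a throwaway kicbase container. Roughly the same invocation via os/exec, with the tarball path, volume name, and image tag taken from this run (the @sha256 digest pin is elided here for brevity):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	volume := "custom-flannel-593530"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837"

	// Mount the tarball read-only and the volume as the extraction target,
	// then run lz4-compressed tar extraction inside the container.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("extract failed:", err)
	}
}
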
	I1109 14:15:12.678165  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:13.177708  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:13.677938  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:14.178225  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:14.677756  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:15.178130  280419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:15.301383  280419 kubeadm.go:1114] duration metric: took 5.250490884s to wait for elevateKubeSystemPrivileges
	I1109 14:15:15.301418  280419 kubeadm.go:403] duration metric: took 17.11607512s to StartCluster
	I1109 14:15:15.301437  280419 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:15.301499  280419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:15.303477  280419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:15.303865  280419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:15:15.304287  280419 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:15.304563  280419 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:15:15.304684  280419 addons.go:70] Setting storage-provisioner=true in profile "kindnet-593530"
	I1109 14:15:15.304706  280419 addons.go:239] Setting addon storage-provisioner=true in "kindnet-593530"
	I1109 14:15:15.304724  280419 addons.go:70] Setting default-storageclass=true in profile "kindnet-593530"
	I1109 14:15:15.304735  280419 host.go:66] Checking if "kindnet-593530" exists ...
	I1109 14:15:15.304739  280419 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-593530"
	I1109 14:15:15.305102  280419 cli_runner.go:164] Run: docker container inspect kindnet-593530 --format={{.State.Status}}
	I1109 14:15:15.305387  280419 config.go:182] Loaded profile config "kindnet-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:15.305870  280419 cli_runner.go:164] Run: docker container inspect kindnet-593530 --format={{.State.Status}}
	I1109 14:15:15.308077  280419 out.go:179] * Verifying Kubernetes components...
	I1109 14:15:15.309627  280419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:15.349070  280419 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:15:15.350504  280419 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:15.350574  280419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:15:15.350738  280419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-593530
	I1109 14:15:15.352024  280419 addons.go:239] Setting addon default-storageclass=true in "kindnet-593530"
	I1109 14:15:15.352134  280419 host.go:66] Checking if "kindnet-593530" exists ...
	I1109 14:15:15.353153  280419 cli_runner.go:164] Run: docker container inspect kindnet-593530 --format={{.State.Status}}
	I1109 14:15:15.388873  280419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/kindnet-593530/id_rsa Username:docker}
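[annotation] The sshutil.go:53 line above opens the SSH session used for all following ssh_runner commands: the node is reached through the host port that docker published for the container's port 22 (33105 here), authenticating as user docker with the profile's machine key. A hedged sketch of such a client using golang.org/x/crypto/ssh (an assumed dependency; the log does not show minikube's own client wiring):

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// IP, port, key path and user taken from the sshutil.go:53 line above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21139-5854/.minikube/machines/kindnet-593530/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, host key not pinned
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33105", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		fmt.Println("connected to", client.RemoteAddr())
	}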
	I1109 14:15:15.395087  280419 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:15.395108  280419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:15:15.395252  280419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-593530
	I1109 14:15:15.427397  280419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/kindnet-593530/id_rsa Username:docker}
	I1109 14:15:15.443429  280419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
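[annotation] The pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the network gateway: a hosts{} stanza is spliced in ahead of the "forward . /etc/resolv.conf" line (and a log directive ahead of errors) before the ConfigMap is replaced. A rough Go equivalent of the Corefile edit, written here only to make the sed expression easier to read, not the code minikube runs:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} block before the forward plugin so
	// host.minikube.internal resolves to hostIP inside the cluster.
	func injectHostRecord(corefile, hostIP string) string {
		block := "        hosts {\n" +
			"           " + hostIP + " host.minikube.internal\n" +
			"           fallthrough\n" +
			"        }"
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out = append(out, block)
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n    }"
		fmt.Println(injectHostRecord(corefile, "192.168.76.1"))
	}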
	I1109 14:15:15.559988  280419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:15.570733  280419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:15.573533  280419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:15.787685  280419 node_ready.go:35] waiting up to 15m0s for node "kindnet-593530" to be "Ready" ...
	I1109 14:15:15.787972  280419 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1109 14:15:16.178152  280419 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:15:12.011923  287405 addons.go:515] duration metric: took 2.667762886s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:15:12.504017  287405 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:15:12.509069  287405 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:15:12.510160  287405 api_server.go:141] control plane version: v1.34.1
	I1109 14:15:12.510182  287405 api_server.go:131] duration metric: took 507.056828ms to wait for apiserver health ...
	I1109 14:15:12.510193  287405 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:15:12.513951  287405 system_pods.go:59] 8 kube-system pods found
	I1109 14:15:12.513979  287405 system_pods.go:61] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:12.513988  287405 system_pods.go:61] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:15:12.513995  287405 system_pods.go:61] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:15:12.514002  287405 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:12.514008  287405 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:12.514013  287405 system_pods.go:61] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:15:12.514018  287405 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:12.514026  287405 system_pods.go:61] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:12.514031  287405 system_pods.go:74] duration metric: took 3.833097ms to wait for pod list to return data ...
	I1109 14:15:12.514041  287405 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:15:12.516369  287405 default_sa.go:45] found service account: "default"
	I1109 14:15:12.516389  287405 default_sa.go:55] duration metric: took 2.34269ms for default service account to be created ...
	I1109 14:15:12.516398  287405 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:15:12.518769  287405 system_pods.go:86] 8 kube-system pods found
	I1109 14:15:12.518795  287405 system_pods.go:89] "coredns-66bc5c9577-z8lkx" [2a7e151f-1d30-4932-acb4-60f6c560cc8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:12.518806  287405 system_pods.go:89] "etcd-default-k8s-diff-port-326524" [25701bc5-9aef-490f-b0cf-5e487621fc8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:15:12.518816  287405 system_pods.go:89] "kindnet-fdxsl" [4c264413-e8be-44cf-97d3-3fbdc1ca9aa9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:15:12.518826  287405 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-326524" [776ac050-a5e6-466c-aff7-0b4fa416d707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:12.518834  287405 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-326524" [452a02c2-1f3f-4c1a-8430-7dbf27daccb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:12.518843  287405 system_pods.go:89] "kube-proxy-n95wb" [39336fb9-1647-458b-802a-16247e882272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:15:12.518851  287405 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-326524" [a5a8f717-0eb7-4cd2-bcba-4a3ee671203c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:12.518868  287405 system_pods.go:89] "storage-provisioner" [75f7d5d8-7cac-41ea-9ada-b9f96eaab5f6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:12.518891  287405 system_pods.go:126] duration metric: took 2.480872ms to wait for k8s-apps to be running ...
	I1109 14:15:12.518899  287405 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:15:12.518990  287405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:12.536796  287405 system_svc.go:56] duration metric: took 17.880309ms WaitForService to wait for kubelet
	I1109 14:15:12.536833  287405 kubeadm.go:587] duration metric: took 3.192786831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:15:12.536858  287405 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:15:12.539187  287405 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:15:12.539213  287405 node_conditions.go:123] node cpu capacity is 8
	I1109 14:15:12.539226  287405 node_conditions.go:105] duration metric: took 2.362428ms to run NodePressure ...
	I1109 14:15:12.539241  287405 start.go:242] waiting for startup goroutines ...
	I1109 14:15:12.539254  287405 start.go:247] waiting for cluster config update ...
	I1109 14:15:12.539271  287405 start.go:256] writing updated cluster config ...
	I1109 14:15:12.539546  287405 ssh_runner.go:195] Run: rm -f paused
	I1109 14:15:12.543827  287405 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:12.546987  287405 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:15:14.552448  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:16.556111  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	I1109 14:15:16.179775  280419 addons.go:515] duration metric: took 875.218856ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:15:16.296630  280419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-593530" context rescaled to 1 replicas
	I1109 14:15:15.183466  292305 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-593530:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.858927993s)
	I1109 14:15:15.183557  292305 kic.go:203] duration metric: took 4.85916342s to extract preloaded images to volume ...
	W1109 14:15:15.183742  292305 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1109 14:15:15.183798  292305 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1109 14:15:15.183845  292305 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:15:15.278360  292305 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-593530 --name custom-flannel-593530 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-593530 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-593530 --network custom-flannel-593530 --ip 192.168.103.2 --volume custom-flannel-593530:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:15:15.791683  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Running}}
	I1109 14:15:15.823726  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:15.852750  292305 cli_runner.go:164] Run: docker exec custom-flannel-593530 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:15:15.926392  292305 oci.go:144] the created container "custom-flannel-593530" has a running status.
	I1109 14:15:15.926496  292305 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa...
	I1109 14:15:16.143382  292305 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:15:16.192826  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:16.230105  292305 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:15:16.230125  292305 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-593530 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:15:16.307802  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:16.339060  292305 machine.go:94] provisionDockerMachine start ...
	I1109 14:15:16.339158  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:16.365882  292305 main.go:143] libmachine: Using SSH client type: native
	I1109 14:15:16.366245  292305 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1109 14:15:16.366261  292305 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:15:16.367251  292305 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37368->127.0.0.1:33120: read: connection reset by peer
	I1109 14:15:20.489971  285057 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:15:20.490049  285057 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:15:20.490188  285057 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:15:20.490270  285057 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:15:20.490317  285057 kubeadm.go:319] OS: Linux
	I1109 14:15:20.490421  285057 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:15:20.490501  285057 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:15:20.490546  285057 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:15:20.490607  285057 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:15:20.490689  285057 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:15:20.490755  285057 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:15:20.490829  285057 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:15:20.490899  285057 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:15:20.490993  285057 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:15:20.491141  285057 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:15:20.491265  285057 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:15:20.491335  285057 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:15:20.492697  285057 out.go:252]   - Generating certificates and keys ...
	I1109 14:15:20.492795  285057 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:15:20.492891  285057 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:15:20.492998  285057 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:15:20.493096  285057 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:15:20.493217  285057 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:15:20.493288  285057 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:15:20.493363  285057 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:15:20.493513  285057 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-593530 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1109 14:15:20.493584  285057 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:15:20.493745  285057 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-593530 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1109 14:15:20.493833  285057 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:15:20.493923  285057 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:15:20.493983  285057 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:15:20.494059  285057 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:15:20.494137  285057 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:15:20.494213  285057 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:15:20.494295  285057 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:15:20.494383  285057 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:15:20.494491  285057 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:15:20.494625  285057 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:15:20.494749  285057 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:15:20.496287  285057 out.go:252]   - Booting up control plane ...
	I1109 14:15:20.496383  285057 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:15:20.496489  285057 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:15:20.496580  285057 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:15:20.496745  285057 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:15:20.496885  285057 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:15:20.497027  285057 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:15:20.497145  285057 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:15:20.497206  285057 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:15:20.497398  285057 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:15:20.497544  285057 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:15:20.497634  285057 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001183215s
	I1109 14:15:20.497783  285057 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:15:20.497899  285057 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1109 14:15:20.498049  285057 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:15:20.498180  285057 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:15:20.498288  285057 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.18698712s
	I1109 14:15:20.498398  285057 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.533905838s
	I1109 14:15:20.498521  285057 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001883071s
	I1109 14:15:20.498699  285057 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:15:20.498867  285057 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:15:20.498955  285057 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:15:20.499232  285057 kubeadm.go:319] [mark-control-plane] Marking the node calico-593530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:15:20.499328  285057 kubeadm.go:319] [bootstrap-token] Using token: yjsvjs.iphyvpgb7olgu2sq
	I1109 14:15:20.500606  285057 out.go:252]   - Configuring RBAC rules ...
	I1109 14:15:20.500765  285057 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:15:20.500886  285057 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:15:20.501091  285057 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:15:20.501272  285057 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:15:20.501443  285057 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:15:20.501573  285057 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:15:20.501779  285057 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:15:20.501846  285057 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:15:20.501913  285057 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:15:20.501923  285057 kubeadm.go:319] 
	I1109 14:15:20.502013  285057 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:15:20.502023  285057 kubeadm.go:319] 
	I1109 14:15:20.502142  285057 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:15:20.502151  285057 kubeadm.go:319] 
	I1109 14:15:20.502186  285057 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:15:20.502291  285057 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:15:20.502371  285057 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:15:20.502378  285057 kubeadm.go:319] 
	I1109 14:15:20.502448  285057 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:15:20.502464  285057 kubeadm.go:319] 
	I1109 14:15:20.502527  285057 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:15:20.502536  285057 kubeadm.go:319] 
	I1109 14:15:20.502601  285057 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:15:20.502724  285057 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:15:20.502832  285057 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:15:20.502846  285057 kubeadm.go:319] 
	I1109 14:15:20.502986  285057 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:15:20.503103  285057 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:15:20.503113  285057 kubeadm.go:319] 
	I1109 14:15:20.503231  285057 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yjsvjs.iphyvpgb7olgu2sq \
	I1109 14:15:20.503385  285057 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:15:20.503422  285057 kubeadm.go:319] 	--control-plane 
	I1109 14:15:20.503431  285057 kubeadm.go:319] 
	I1109 14:15:20.503543  285057 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:15:20.503553  285057 kubeadm.go:319] 
	I1109 14:15:20.503687  285057 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yjsvjs.iphyvpgb7olgu2sq \
	I1109 14:15:20.503827  285057 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 14:15:20.503838  285057 cni.go:84] Creating CNI manager for "calico"
	I1109 14:15:20.507565  285057 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1109 14:15:20.509507  285057 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:15:20.509531  285057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329845 bytes)
	I1109 14:15:20.524304  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:15:21.544389  285057 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.020047162s)
	I1109 14:15:21.544429  285057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:15:21.544571  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:21.544686  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-593530 minikube.k8s.io/updated_at=2025_11_09T14_15_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=calico-593530 minikube.k8s.io/primary=true
	I1109 14:15:21.655688  285057 ops.go:34] apiserver oom_adj: -16
	I1109 14:15:21.655781  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1109 14:15:19.052783  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:21.058845  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:17.791410  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:19.792408  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:21.792804  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:19.509033  292305 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-593530
	
	I1109 14:15:19.509062  292305 ubuntu.go:182] provisioning hostname "custom-flannel-593530"
	I1109 14:15:19.509131  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:19.534864  292305 main.go:143] libmachine: Using SSH client type: native
	I1109 14:15:19.536130  292305 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1109 14:15:19.536154  292305 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-593530 && echo "custom-flannel-593530" | sudo tee /etc/hostname
	I1109 14:15:19.700709  292305 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-593530
	
	I1109 14:15:19.700801  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:19.722041  292305 main.go:143] libmachine: Using SSH client type: native
	I1109 14:15:19.722426  292305 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1109 14:15:19.722454  292305 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-593530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-593530/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-593530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:15:19.869356  292305 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:15:19.869395  292305 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-5854/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-5854/.minikube}
	I1109 14:15:19.869419  292305 ubuntu.go:190] setting up certificates
	I1109 14:15:19.869433  292305 provision.go:84] configureAuth start
	I1109 14:15:19.869485  292305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-593530
	I1109 14:15:19.892501  292305 provision.go:143] copyHostCerts
	I1109 14:15:19.892569  292305 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem, removing ...
	I1109 14:15:19.892585  292305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem
	I1109 14:15:19.892751  292305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/key.pem (1679 bytes)
	I1109 14:15:19.892940  292305 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem, removing ...
	I1109 14:15:19.892955  292305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem
	I1109 14:15:19.893768  292305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/ca.pem (1078 bytes)
	I1109 14:15:19.893998  292305 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem, removing ...
	I1109 14:15:19.894016  292305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem
	I1109 14:15:19.894067  292305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-5854/.minikube/cert.pem (1123 bytes)
	I1109 14:15:19.894169  292305 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-593530 san=[127.0.0.1 192.168.103.2 custom-flannel-593530 localhost minikube]
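[annotation] provision.go:117 above generates the machine's server certificate, signed by the profile CA and valid for the loopback address, the container IP, and the profile/localhost/minikube hostnames. A hedged sketch of the corresponding x509 template in Go; the SAN values are lifted from the san=[...] list in the log, while key generation and CA signing are omitted:

	package main

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-593530"}},
			DNSNames:     []string{"custom-flannel-593530", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		fmt.Printf("server cert SANs: DNS=%v IPs=%v\n", tmpl.DNSNames, tmpl.IPAddresses)
	}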
	I1109 14:15:20.152110  292305 provision.go:177] copyRemoteCerts
	I1109 14:15:20.152180  292305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:15:20.152294  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:20.178378  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:20.280436  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 14:15:20.303490  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:15:20.326112  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:15:20.348207  292305 provision.go:87] duration metric: took 478.760759ms to configureAuth
	I1109 14:15:20.348237  292305 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:15:20.348418  292305 config.go:182] Loaded profile config "custom-flannel-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:20.348540  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:20.370860  292305 main.go:143] libmachine: Using SSH client type: native
	I1109 14:15:20.371177  292305 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1109 14:15:20.371202  292305 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:15:20.650578  292305 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:15:20.650604  292305 machine.go:97] duration metric: took 4.311520822s to provisionDockerMachine
	I1109 14:15:20.650617  292305 client.go:176] duration metric: took 11.085624819s to LocalClient.Create
	I1109 14:15:20.650630  292305 start.go:167] duration metric: took 11.08567915s to libmachine.API.Create "custom-flannel-593530"
	I1109 14:15:20.650665  292305 start.go:293] postStartSetup for "custom-flannel-593530" (driver="docker")
	I1109 14:15:20.650680  292305 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:15:20.650755  292305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:15:20.650820  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:20.675138  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:20.784389  292305 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:15:20.789100  292305 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:15:20.789133  292305 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:15:20.789146  292305 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/addons for local assets ...
	I1109 14:15:20.789200  292305 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-5854/.minikube/files for local assets ...
	I1109 14:15:20.789306  292305 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem -> 93652.pem in /etc/ssl/certs
	I1109 14:15:20.789423  292305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:15:20.799678  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:15:20.826942  292305 start.go:296] duration metric: took 176.260456ms for postStartSetup
	I1109 14:15:20.827473  292305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-593530
	I1109 14:15:20.853401  292305 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/config.json ...
	I1109 14:15:20.853846  292305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:15:20.853909  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:20.880560  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:20.985667  292305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:15:20.991818  292305 start.go:128] duration metric: took 11.42871736s to createHost
	I1109 14:15:20.991872  292305 start.go:83] releasing machines lock for "custom-flannel-593530", held for 11.428883095s
	I1109 14:15:20.991956  292305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-593530
	I1109 14:15:21.018112  292305 ssh_runner.go:195] Run: cat /version.json
	I1109 14:15:21.018179  292305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:15:21.018195  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:21.018261  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:21.043668  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:21.047050  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:21.149970  292305 ssh_runner.go:195] Run: systemctl --version
	I1109 14:15:21.237272  292305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:15:21.292179  292305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:15:21.300946  292305 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:15:21.301014  292305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:15:21.349114  292305 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:15:21.349142  292305 start.go:496] detecting cgroup driver to use...
	I1109 14:15:21.349177  292305 detect.go:190] detected "systemd" cgroup driver on host os
	I1109 14:15:21.349229  292305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:15:21.373055  292305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:15:21.393167  292305 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:15:21.393343  292305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:15:21.420863  292305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:15:21.457951  292305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:15:21.581499  292305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:15:21.705285  292305 docker.go:234] disabling docker service ...
	I1109 14:15:21.705373  292305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:15:21.730862  292305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:15:21.749073  292305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:15:21.881211  292305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:15:21.999688  292305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:15:22.017667  292305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:15:22.034631  292305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:15:22.034707  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.134259  292305 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1109 14:15:22.134338  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.167652  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.179909  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.216274  292305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:15:22.226166  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.244315  292305 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.293992  292305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:15:22.304234  292305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:15:22.311974  292305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:15:22.319571  292305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:22.409241  292305 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:15:23.021268  292305 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:15:23.021335  292305 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:15:23.025716  292305 start.go:564] Will wait 60s for crictl version
	I1109 14:15:23.025767  292305 ssh_runner.go:195] Run: which crictl
	I1109 14:15:23.029924  292305 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:15:23.056756  292305 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:15:23.056835  292305 ssh_runner.go:195] Run: crio --version
	I1109 14:15:23.083822  292305 ssh_runner.go:195] Run: crio --version
	I1109 14:15:23.111840  292305 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:15:23.112812  292305 cli_runner.go:164] Run: docker network inspect custom-flannel-593530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:15:23.129398  292305 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1109 14:15:23.133371  292305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:15:23.144139  292305 kubeadm.go:884] updating cluster {Name:custom-flannel-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCore
DNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:15:23.144265  292305 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:15:23.144312  292305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:15:23.176128  292305 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:15:23.176149  292305 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:15:23.176195  292305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:15:23.202207  292305 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:15:23.202230  292305 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:15:23.202239  292305 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1109 14:15:23.202354  292305 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-593530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1109 14:15:23.202432  292305 ssh_runner.go:195] Run: crio config
	I1109 14:15:23.249086  292305 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1109 14:15:23.249126  292305 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:15:23.249153  292305 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-593530 NodeName:custom-flannel-593530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:15:23.249292  292305 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-593530"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
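The YAML above is the kubeadm configuration minikube generates before bootstrapping the cluster (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file). As a hedged aside, not part of the test output: a file like this can be exercised without modifying the node by using kubeadm's dry-run mode, assuming a matching kubeadm v1.34.x binary is available.

	# Illustrative only: run the init phases against the generated config without
	# persisting any changes; the path mirrors the one used later in this log.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
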
	I1109 14:15:23.249347  292305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:15:23.258175  292305 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:15:23.258228  292305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:15:23.265460  292305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1109 14:15:23.277235  292305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:15:23.291948  292305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1109 14:15:23.303747  292305 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:15:23.307115  292305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:15:23.316244  292305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:23.396667  292305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:23.417622  292305 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530 for IP: 192.168.103.2
	I1109 14:15:23.417652  292305 certs.go:195] generating shared ca certs ...
	I1109 14:15:23.417670  292305 certs.go:227] acquiring lock for ca certs: {Name:mk1fbc7fb5aaf0e87090d75145f91c095ef07289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.417825  292305 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key
	I1109 14:15:23.417874  292305 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key
	I1109 14:15:23.417887  292305 certs.go:257] generating profile certs ...
	I1109 14:15:23.417955  292305 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.key
	I1109 14:15:23.417971  292305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.crt with IP's: []
	I1109 14:15:23.475470  292305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.crt ...
	I1109 14:15:23.475495  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.crt: {Name:mk6cc8a56c5a7e03bae4f26e654eb21732b60f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.475666  292305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.key ...
	I1109 14:15:23.475688  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/client.key: {Name:mkd32921880ae6490d9b36f6589b11af2e82bda4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.475808  292305 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key.bd96a32b
	I1109 14:15:23.475837  292305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt.bd96a32b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1109 14:15:23.507789  292305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt.bd96a32b ...
	I1109 14:15:23.507814  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt.bd96a32b: {Name:mkea1975b9862b4f62d0e1cfe3f59dac63fdc488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.507963  292305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key.bd96a32b ...
	I1109 14:15:23.507982  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key.bd96a32b: {Name:mkd8316faf286f0a2a7f529b2fea1fdabd61ffa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:23.508079  292305 certs.go:382] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt.bd96a32b -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt
	I1109 14:15:23.508183  292305 certs.go:386] copying /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key.bd96a32b -> /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key
	I1109 14:15:23.508266  292305 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.key
	I1109 14:15:23.508290  292305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.crt with IP's: []
	I1109 14:15:24.163784  292305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.crt ...
	I1109 14:15:24.163808  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.crt: {Name:mkdc1d9208a395139efe0f54f1eb35bd3a932934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:24.163955  292305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.key ...
	I1109 14:15:24.163970  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.key: {Name:mka27109506b5085edf8a42f4a73129a9eb93eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:24.164130  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem (1338 bytes)
	W1109 14:15:24.164173  292305 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365_empty.pem, impossibly tiny 0 bytes
	I1109 14:15:24.164187  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:15:24.164217  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/ca.pem (1078 bytes)
	I1109 14:15:24.164244  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:15:24.164265  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/certs/key.pem (1679 bytes)
	I1109 14:15:24.164303  292305 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem (1708 bytes)
	I1109 14:15:24.164883  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:15:24.182788  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 14:15:24.200476  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:15:24.220455  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:15:24.237975  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1109 14:15:22.156614  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:22.656717  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:23.156239  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:23.656838  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:24.156499  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:24.656141  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:25.156834  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:25.656459  285057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:25.739531  285057 kubeadm.go:1114] duration metric: took 4.195008704s to wait for elevateKubeSystemPrivileges
	I1109 14:15:25.739565  285057 kubeadm.go:403] duration metric: took 18.579357042s to StartCluster
	I1109 14:15:25.739586  285057 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:25.739699  285057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:25.741526  285057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:25.741787  285057 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:25.741826  285057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:15:25.742027  285057 config.go:182] Loaded profile config "calico-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:25.741976  285057 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:15:25.742052  285057 addons.go:70] Setting storage-provisioner=true in profile "calico-593530"
	I1109 14:15:25.742069  285057 addons.go:70] Setting default-storageclass=true in profile "calico-593530"
	I1109 14:15:25.742071  285057 addons.go:239] Setting addon storage-provisioner=true in "calico-593530"
	I1109 14:15:25.742081  285057 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-593530"
	I1109 14:15:25.742103  285057 host.go:66] Checking if "calico-593530" exists ...
	I1109 14:15:25.742807  285057 cli_runner.go:164] Run: docker container inspect calico-593530 --format={{.State.Status}}
	I1109 14:15:25.743008  285057 cli_runner.go:164] Run: docker container inspect calico-593530 --format={{.State.Status}}
	I1109 14:15:25.745013  285057 out.go:179] * Verifying Kubernetes components...
	I1109 14:15:25.746081  285057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:25.771718  285057 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:15:25.772793  285057 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:25.772810  285057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:15:25.772865  285057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-593530
	I1109 14:15:25.774122  285057 addons.go:239] Setting addon default-storageclass=true in "calico-593530"
	I1109 14:15:25.774172  285057 host.go:66] Checking if "calico-593530" exists ...
	I1109 14:15:25.774670  285057 cli_runner.go:164] Run: docker container inspect calico-593530 --format={{.State.Status}}
	I1109 14:15:25.801675  285057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/calico-593530/id_rsa Username:docker}
	I1109 14:15:25.806922  285057 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:25.806945  285057 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:15:25.807008  285057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-593530
	I1109 14:15:25.835285  285057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/calico-593530/id_rsa Username:docker}
	I1109 14:15:25.874911  285057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:15:25.927388  285057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:25.938215  285057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:25.951126  285057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:26.058735  285057 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1109 14:15:26.230438  285057 node_ready.go:35] waiting up to 15m0s for node "calico-593530" to be "Ready" ...
	I1109 14:15:26.234893  285057 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:15:26.235861  285057 addons.go:515] duration metric: took 493.881381ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:15:26.563938  285057 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-593530" context rescaled to 1 replicas
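As a side note (not taken from the run): once the addons are reported as enabled, their status for this profile could be confirmed from the host with the same binary the suite drives.

	# Hypothetical follow-up check; lists addon status for the calico-593530 profile
	out/minikube-linux-amd64 -p calico-593530 addons list
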
	W1109 14:15:23.551954  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:25.552357  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:24.291655  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:26.791414  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:24.257331  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:15:24.273901  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:15:24.290399  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/custom-flannel-593530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:15:24.307504  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/certs/9365.pem --> /usr/share/ca-certificates/9365.pem (1338 bytes)
	I1109 14:15:24.324968  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/ssl/certs/93652.pem --> /usr/share/ca-certificates/93652.pem (1708 bytes)
	I1109 14:15:24.341724  292305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:15:24.358442  292305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:15:24.370001  292305 ssh_runner.go:195] Run: openssl version
	I1109 14:15:24.375754  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9365.pem && ln -fs /usr/share/ca-certificates/9365.pem /etc/ssl/certs/9365.pem"
	I1109 14:15:24.383447  292305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9365.pem
	I1109 14:15:24.386799  292305 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:34 /usr/share/ca-certificates/9365.pem
	I1109 14:15:24.386842  292305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9365.pem
	I1109 14:15:24.421676  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9365.pem /etc/ssl/certs/51391683.0"
	I1109 14:15:24.429446  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93652.pem && ln -fs /usr/share/ca-certificates/93652.pem /etc/ssl/certs/93652.pem"
	I1109 14:15:24.437149  292305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93652.pem
	I1109 14:15:24.440861  292305 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:34 /usr/share/ca-certificates/93652.pem
	I1109 14:15:24.440909  292305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93652.pem
	I1109 14:15:24.495759  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93652.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:15:24.504738  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:15:24.512949  292305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:24.516510  292305 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:24.516556  292305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:15:24.555449  292305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:15:24.564580  292305 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:15:24.568327  292305 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:15:24.568395  292305 kubeadm.go:401] StartCluster: {Name:custom-flannel-593530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-593530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:15:24.568463  292305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:15:24.568515  292305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:15:24.595272  292305 cri.go:89] found id: ""
	I1109 14:15:24.595332  292305 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:15:24.603201  292305 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:15:24.611034  292305 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:15:24.611084  292305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:15:24.618591  292305 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:15:24.618605  292305 kubeadm.go:158] found existing configuration files:
	
	I1109 14:15:24.618635  292305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:15:24.625810  292305 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:15:24.625860  292305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:15:24.632814  292305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:15:24.640133  292305 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:15:24.640173  292305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:15:24.647571  292305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:15:24.654499  292305 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:15:24.654542  292305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:15:24.661867  292305 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:15:24.668941  292305 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:15:24.668982  292305 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:15:24.676006  292305 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:15:24.737957  292305 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1109 14:15:24.795866  292305 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1109 14:15:28.233281  285057 node_ready.go:57] node "calico-593530" has "Ready":"False" status (will retry)
	W1109 14:15:30.233963  285057 node_ready.go:57] node "calico-593530" has "Ready":"False" status (will retry)
	I1109 14:15:30.734928  285057 node_ready.go:49] node "calico-593530" is "Ready"
	I1109 14:15:30.734960  285057 node_ready.go:38] duration metric: took 4.50449231s for node "calico-593530" to be "Ready" ...
	I1109 14:15:30.734976  285057 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:15:30.735037  285057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:15:30.754867  285057 api_server.go:72] duration metric: took 5.013042529s to wait for apiserver process to appear ...
	I1109 14:15:30.754903  285057 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:15:30.754925  285057 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1109 14:15:30.767548  285057 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1109 14:15:30.769336  285057 api_server.go:141] control plane version: v1.34.1
	I1109 14:15:30.769394  285057 api_server.go:131] duration metric: took 14.481552ms to wait for apiserver health ...
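The healthz probe logged above is a plain HTTPS GET against the apiserver; a hedged manual equivalent from the host, assuming 192.168.94.2:8443 is reachable, would be:

	# Illustrative only: -k skips TLS verification because the cluster CA is not
	# in the host trust store; a healthy apiserver answers 200 with body "ok".
	curl -k https://192.168.94.2:8443/healthz
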
	I1109 14:15:30.769411  285057 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:15:30.776264  285057 system_pods.go:59] 9 kube-system pods found
	I1109 14:15:30.776311  285057 system_pods.go:61] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:30.776328  285057 system_pods.go:61] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:30.776337  285057 system_pods.go:61] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:30.776347  285057 system_pods.go:61] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:30.776352  285057 system_pods.go:61] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:30.776359  285057 system_pods.go:61] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:30.776363  285057 system_pods.go:61] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:30.776368  285057 system_pods.go:61] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:30.776374  285057 system_pods.go:61] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:30.776382  285057 system_pods.go:74] duration metric: took 6.946108ms to wait for pod list to return data ...
	I1109 14:15:30.776392  285057 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:15:30.779990  285057 default_sa.go:45] found service account: "default"
	I1109 14:15:30.780014  285057 default_sa.go:55] duration metric: took 3.615627ms for default service account to be created ...
	I1109 14:15:30.780026  285057 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:15:30.861040  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:30.861076  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:30.861089  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:30.861106  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:30.861113  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:30.861119  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:30.861134  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:30.861142  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:30.861149  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:30.861158  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:30.861206  285057 retry.go:31] will retry after 222.389521ms: missing components: kube-dns
	I1109 14:15:31.088074  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:31.088116  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:31.088127  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:31.088181  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:31.088194  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:31.088202  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:31.088207  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:31.088211  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:31.088214  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:31.088217  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:31.088233  285057 retry.go:31] will retry after 259.900062ms: missing components: kube-dns
	I1109 14:15:31.356775  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:31.356818  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:31.356838  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:31.356854  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:31.356863  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:31.356871  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:31.356879  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:31.356885  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:31.356896  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:31.356902  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:31.356919  285057 retry.go:31] will retry after 380.857905ms: missing components: kube-dns
	W1109 14:15:27.553578  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:30.053185  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:29.291525  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:31.293205  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:34.760952  292305 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:15:34.761035  292305 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:15:34.761189  292305 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:15:34.761308  292305 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1109 14:15:34.761393  292305 kubeadm.go:319] OS: Linux
	I1109 14:15:34.761469  292305 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:15:34.761536  292305 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:15:34.761631  292305 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:15:34.761717  292305 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:15:34.761788  292305 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:15:34.761854  292305 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:15:34.761930  292305 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:15:34.761994  292305 kubeadm.go:319] CGROUPS_IO: enabled
	I1109 14:15:34.762086  292305 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:15:34.762214  292305 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:15:34.762345  292305 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:15:34.762435  292305 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:15:34.853035  292305 out.go:252]   - Generating certificates and keys ...
	I1109 14:15:34.853151  292305 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:15:34.853250  292305 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:15:34.853367  292305 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:15:34.853461  292305 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:15:34.853555  292305 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:15:34.853624  292305 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:15:34.853714  292305 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:15:34.853887  292305 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-593530 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1109 14:15:34.853992  292305 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:15:34.854205  292305 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-593530 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1109 14:15:34.854312  292305 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:15:34.854404  292305 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:15:34.854470  292305 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:15:34.854551  292305 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:15:34.854628  292305 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:15:34.854738  292305 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:15:34.854807  292305 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:15:34.854936  292305 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:15:34.855040  292305 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:15:34.855177  292305 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:15:34.855276  292305 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:15:35.003588  292305 out.go:252]   - Booting up control plane ...
	I1109 14:15:35.003746  292305 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:15:35.003873  292305 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:15:35.003979  292305 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:15:35.004143  292305 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:15:35.004300  292305 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:15:35.004467  292305 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:15:35.004606  292305 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:15:35.004688  292305 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:15:35.004878  292305 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:15:35.005037  292305 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:15:35.005130  292305 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001593024s
	I1109 14:15:35.005279  292305 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:15:35.005395  292305 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1109 14:15:35.005533  292305 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:15:35.005675  292305 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:15:35.005783  292305 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.592199819s
	I1109 14:15:35.005876  292305 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.59222063s
	I1109 14:15:35.005980  292305 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50174081s
	I1109 14:15:35.006127  292305 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:15:35.006295  292305 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:15:35.006376  292305 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:15:35.006690  292305 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-593530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:15:35.006770  292305 kubeadm.go:319] [bootstrap-token] Using token: j7ym4d.0t4svojy4g5mzhlf
	I1109 14:15:35.046560  292305 out.go:252]   - Configuring RBAC rules ...
	I1109 14:15:35.046804  292305 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:15:35.046919  292305 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:15:35.047109  292305 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:15:35.047282  292305 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:15:35.047479  292305 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:15:35.047584  292305 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:15:35.047749  292305 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:15:35.047823  292305 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:15:35.049090  292305 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:15:35.049104  292305 kubeadm.go:319] 
	I1109 14:15:35.049183  292305 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:15:35.049193  292305 kubeadm.go:319] 
	I1109 14:15:35.049312  292305 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:15:35.049320  292305 kubeadm.go:319] 
	I1109 14:15:35.049369  292305 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:15:35.049569  292305 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:15:35.049762  292305 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:15:35.049823  292305 kubeadm.go:319] 
	I1109 14:15:35.049902  292305 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:15:35.049914  292305 kubeadm.go:319] 
	I1109 14:15:35.049985  292305 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:15:35.049991  292305 kubeadm.go:319] 
	I1109 14:15:35.050071  292305 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:15:35.050363  292305 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:15:35.050547  292305 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:15:35.050571  292305 kubeadm.go:319] 
	I1109 14:15:35.050735  292305 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:15:35.050950  292305 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:15:35.050963  292305 kubeadm.go:319] 
	I1109 14:15:35.051066  292305 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j7ym4d.0t4svojy4g5mzhlf \
	I1109 14:15:35.052172  292305 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f \
	I1109 14:15:35.052249  292305 kubeadm.go:319] 	--control-plane 
	I1109 14:15:35.052261  292305 kubeadm.go:319] 
	I1109 14:15:35.052378  292305 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:15:35.052388  292305 kubeadm.go:319] 
	I1109 14:15:35.052505  292305 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j7ym4d.0t4svojy4g5mzhlf \
	I1109 14:15:35.052710  292305 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:08e534015abbea5eeb21d012a1d227312ee7c5c23acf0519e2795e3f65d98b9f 
	I1109 14:15:35.052728  292305 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1109 14:15:35.055275  292305 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
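Configuring a custom CNI at this step amounts to applying the supplied manifest to the freshly initialized cluster; a minimal sketch of the equivalent manual step, assuming the manifest and an admin kubeconfig are both accessible from where the command runs (paths are assumptions, not quoted from the log), is:

	# Illustrative only: apply the flannel manifest used by this test run
	sudo kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f testdata/kube-flannel.yaml
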
	I1109 14:15:31.741481  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:31.741516  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:31.741529  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:31.741534  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:31.741538  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:31.741544  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:31.741550  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:31.741558  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:31.741563  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:31.741568  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:31.741585  285057 retry.go:31] will retry after 380.777126ms: missing components: kube-dns
	I1109 14:15:32.129801  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:32.129926  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:32.129943  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:32.129952  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:32.129959  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:32.129967  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:32.129975  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:32.129981  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:32.130014  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:32.130030  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:32.130056  285057 retry.go:31] will retry after 658.546064ms: missing components: kube-dns
	I1109 14:15:32.792973  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:32.793012  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:32.793023  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:32.793034  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:32.793040  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:32.793048  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:32.793056  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:32.793061  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:32.793066  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:32.793071  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:32.793087  285057 retry.go:31] will retry after 852.732952ms: missing components: kube-dns
	I1109 14:15:33.651061  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:33.651101  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:33.651115  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:33.651131  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:33.651138  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:33.651149  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:33.651157  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:15:33.651166  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:33.651172  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:33.651180  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:33.651198  285057 retry.go:31] will retry after 882.469174ms: missing components: kube-dns
	I1109 14:15:34.538792  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:34.538823  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:34.538832  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:34.538838  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:34.538843  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:34.538848  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:34.538851  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:34.538857  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:34.538860  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:34.538864  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:34.538877  285057 retry.go:31] will retry after 1.018334092s: missing components: kube-dns
	I1109 14:15:35.562102  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:35.562134  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:35.562148  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:35.562158  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:35.562168  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:35.562176  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:35.562181  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:35.562190  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:35.562196  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:35.562204  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:35.562222  285057 retry.go:31] will retry after 1.779834697s: missing components: kube-dns
	W1109 14:15:32.553177  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:35.054319  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:33.791388  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:36.291162  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:35.056434  292305 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:15:35.056494  292305 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1109 14:15:35.061959  292305 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1109 14:15:35.061986  292305 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1109 14:15:35.086158  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:15:35.460740  292305 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:15:35.460827  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:35.460871  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-593530 minikube.k8s.io/updated_at=2025_11_09T14_15_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=custom-flannel-593530 minikube.k8s.io/primary=true
	I1109 14:15:35.473013  292305 ops.go:34] apiserver oom_adj: -16
	I1109 14:15:35.560089  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:36.060333  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:36.561021  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:37.060850  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:37.560914  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:38.060839  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:38.560383  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:39.061005  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:39.560650  292305 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:15:39.641155  292305 kubeadm.go:1114] duration metric: took 4.180391897s to wait for elevateKubeSystemPrivileges
	I1109 14:15:39.641195  292305 kubeadm.go:403] duration metric: took 15.072805775s to StartCluster
	I1109 14:15:39.641214  292305 settings.go:142] acquiring lock: {Name:mk4e77b290eba64589404bd2a5f48c72505ab262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:39.641288  292305 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:15:39.643360  292305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-5854/kubeconfig: {Name:mk43ee6536ba04a62c5ebecd6dbec4011ee2590e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:15:39.643611  292305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:15:39.643621  292305 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:15:39.643710  292305 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:15:39.643853  292305 config.go:182] Loaded profile config "custom-flannel-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:15:39.643857  292305 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-593530"
	I1109 14:15:39.643883  292305 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-593530"
	I1109 14:15:39.643906  292305 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-593530"
	I1109 14:15:39.643919  292305 host.go:66] Checking if "custom-flannel-593530" exists ...
	I1109 14:15:39.643929  292305 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-593530"
	I1109 14:15:39.644473  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:39.644552  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:39.645098  292305 out.go:179] * Verifying Kubernetes components...
	I1109 14:15:39.646129  292305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:15:39.668733  292305 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:15:39.669229  292305 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-593530"
	I1109 14:15:39.669271  292305 host.go:66] Checking if "custom-flannel-593530" exists ...
	I1109 14:15:39.669786  292305 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:39.669808  292305 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:15:39.669853  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:39.669788  292305 cli_runner.go:164] Run: docker container inspect custom-flannel-593530 --format={{.State.Status}}
	I1109 14:15:39.697979  292305 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:39.698001  292305 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:15:39.698133  292305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-593530
	I1109 14:15:39.699215  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:39.722993  292305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/custom-flannel-593530/id_rsa Username:docker}
	I1109 14:15:39.749350  292305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:15:39.809675  292305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:15:39.819927  292305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:15:39.857165  292305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:15:39.994216  292305 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1109 14:15:39.995748  292305 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-593530" to be "Ready" ...
	I1109 14:15:40.350538  292305 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
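	(The 292305 log stream above ends with the "host.minikube.internal" record being spliced into the CoreDNS ConfigMap via a sed pipeline and kubectl replace. A minimal client-go sketch of the same idea follows, for readers tracing this step; it is an illustration only, and the helper name addHostRecord plus the use of client-go instead of minikube's SSH/kubectl path are assumptions.)

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // addHostRecord splices a hosts{} block into the CoreDNS Corefile so that
    // host.minikube.internal resolves to the host gateway IP, mirroring the sed
    // pipeline shown in the log. Hypothetical sketch; minikube performs this over
    // SSH with "kubectl replace" rather than through client-go.
    func addHostRecord(ctx context.Context, cs *kubernetes.Clientset, hostIP string) error {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        corefile := cm.Data["Corefile"]
        if strings.Contains(corefile, "host.minikube.internal") {
            return nil // record already injected
        }
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        // insert the hosts block immediately before the forward plugin stanza
        cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := addHostRecord(context.Background(), kubernetes.NewForConfigOrDie(cfg), "192.168.103.1"); err != nil {
            panic(err)
        }
    }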
	I1109 14:15:37.346530  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:37.346571  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:37.346582  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:37.346594  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:37.346619  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:37.346630  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:37.346636  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:37.346653  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:37.346658  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:37.346663  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:37.346688  285057 retry.go:31] will retry after 1.732906923s: missing components: kube-dns
	I1109 14:15:39.084388  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:39.084425  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:39.084433  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:39.084442  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:39.084447  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:39.084452  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:39.084455  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:39.084460  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:39.084465  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:39.084470  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:39.084486  285057 retry.go:31] will retry after 1.849866542s: missing components: kube-dns
	I1109 14:15:40.938306  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:40.938336  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:40.938343  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:40.938350  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:40.938354  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:40.938358  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:40.938361  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:40.938365  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:40.938370  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:40.938373  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:40.938385  285057 retry.go:31] will retry after 3.175085388s: missing components: kube-dns
	W1109 14:15:37.551964  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:40.053058  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	W1109 14:15:38.293327  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:40.791137  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:40.352482  292305 addons.go:515] duration metric: took 708.767967ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:15:40.500005  292305 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-593530" context rescaled to 1 replicas
	W1109 14:15:41.999207  292305 node_ready.go:57] node "custom-flannel-593530" has "Ready":"False" status (will retry)
	I1109 14:15:43.498508  292305 node_ready.go:49] node "custom-flannel-593530" is "Ready"
	I1109 14:15:43.498532  292305 node_ready.go:38] duration metric: took 3.502752606s for node "custom-flannel-593530" to be "Ready" ...
	I1109 14:15:43.498546  292305 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:15:43.498592  292305 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:15:43.510170  292305 api_server.go:72] duration metric: took 3.866491794s to wait for apiserver process to appear ...
	I1109 14:15:43.510192  292305 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:15:43.510207  292305 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1109 14:15:43.514730  292305 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1109 14:15:43.515441  292305 api_server.go:141] control plane version: v1.34.1
	I1109 14:15:43.515462  292305 api_server.go:131] duration metric: took 5.265462ms to wait for apiserver health ...
	I1109 14:15:43.515470  292305 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:15:43.518520  292305 system_pods.go:59] 7 kube-system pods found
	I1109 14:15:43.518552  292305 system_pods.go:61] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:43.518561  292305 system_pods.go:61] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:43.518569  292305 system_pods.go:61] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:43.518574  292305 system_pods.go:61] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:43.518578  292305 system_pods.go:61] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:43.518583  292305 system_pods.go:61] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:43.518588  292305 system_pods.go:61] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:43.518593  292305 system_pods.go:74] duration metric: took 3.118435ms to wait for pod list to return data ...
	I1109 14:15:43.518599  292305 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:15:43.520498  292305 default_sa.go:45] found service account: "default"
	I1109 14:15:43.520515  292305 default_sa.go:55] duration metric: took 1.910237ms for default service account to be created ...
	I1109 14:15:43.520524  292305 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:15:43.522862  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:43.522884  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:43.522891  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:43.522901  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:43.522907  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:43.522912  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:43.522929  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:43.522934  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:43.522951  292305 retry.go:31] will retry after 271.966763ms: missing components: kube-dns
	I1109 14:15:43.798286  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:43.798325  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:43.798334  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:43.798345  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:43.798351  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:43.798359  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:43.798366  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:43.798372  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:43.798396  292305 retry.go:31] will retry after 248.517234ms: missing components: kube-dns
	I1109 14:15:44.051393  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:44.051428  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:44.051434  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:44.051441  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:15:44.051446  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:44.051452  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:44.051458  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:44.051465  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:44.051485  292305 retry.go:31] will retry after 307.177206ms: missing components: kube-dns
	W1109 14:15:42.055199  287405 pod_ready.go:104] pod "coredns-66bc5c9577-z8lkx" is not "Ready", error: <nil>
	I1109 14:15:44.053379  287405 pod_ready.go:94] pod "coredns-66bc5c9577-z8lkx" is "Ready"
	I1109 14:15:44.053403  287405 pod_ready.go:86] duration metric: took 31.506392424s for pod "coredns-66bc5c9577-z8lkx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.055835  287405 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.059556  287405 pod_ready.go:94] pod "etcd-default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:44.059581  287405 pod_ready.go:86] duration metric: took 3.725825ms for pod "etcd-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.061759  287405 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.065473  287405 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:44.065494  287405 pod_ready.go:86] duration metric: took 3.713918ms for pod "kube-apiserver-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.067343  287405 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.250877  287405 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:44.250902  287405 pod_ready.go:86] duration metric: took 183.538136ms for pod "kube-controller-manager-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.450973  287405 pod_ready.go:83] waiting for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:44.851555  287405 pod_ready.go:94] pod "kube-proxy-n95wb" is "Ready"
	I1109 14:15:44.851581  287405 pod_ready.go:86] duration metric: took 400.585297ms for pod "kube-proxy-n95wb" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:45.050455  287405 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:45.451204  287405 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-326524" is "Ready"
	I1109 14:15:45.451228  287405 pod_ready.go:86] duration metric: took 400.750017ms for pod "kube-scheduler-default-k8s-diff-port-326524" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:45.451239  287405 pod_ready.go:40] duration metric: took 32.907381754s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:45.496002  287405 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:15:45.497770  287405 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-326524" cluster and "default" namespace by default
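	(The pod_ready.go waits above poll each kube-system pod until its Ready condition turns True. A minimal Go sketch of that pattern with client-go follows; waitForPodReady and the fixed two-second poll interval are assumptions for illustration, not minikube's actual implementation.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls a pod until its Ready condition is True or the timeout expires.
    // Hypothetical helper; minikube's pod_ready.go also tolerates the pod disappearing.
    func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // fixed poll interval; the real code varies the delay
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-66bc5c9577-z8lkx", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }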
	I1109 14:15:44.119430  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:44.119460  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:44.119469  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:44.119477  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:44.119481  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:44.119485  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:44.119488  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:44.119492  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:44.119495  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:44.119498  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:44.119510  285057 retry.go:31] will retry after 4.333587155s: missing components: kube-dns
	W1109 14:15:43.290975  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:45.291386  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:44.363012  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:44.363047  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:44.363053  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:44.363058  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:44.363063  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:44.363066  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:44.363078  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:15:44.363082  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:44.363102  292305 retry.go:31] will retry after 593.567401ms: missing components: kube-dns
	I1109 14:15:44.960309  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:44.960362  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:44.960372  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:44.960385  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:44.960397  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:44.960402  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:44.960408  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:44.960415  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:44.960433  292305 retry.go:31] will retry after 649.59511ms: missing components: kube-dns
	I1109 14:15:45.614668  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:45.614707  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:45.614716  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:45.614724  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:45.614730  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:45.614735  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:45.614746  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:45.614751  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:45.614772  292305 retry.go:31] will retry after 928.305564ms: missing components: kube-dns
	I1109 14:15:46.547048  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:46.547085  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:46.547094  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:46.547102  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:46.547108  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:46.547113  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:46.547118  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:46.547123  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:46.547142  292305 retry.go:31] will retry after 1.104834349s: missing components: kube-dns
	I1109 14:15:47.657070  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:47.657132  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:47.657140  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:47.657160  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:47.657176  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:47.657181  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:47.657186  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:47.657195  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:47.657214  292305 retry.go:31] will retry after 1.315228447s: missing components: kube-dns
	I1109 14:15:48.976003  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:48.976050  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:48.976059  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:48.976067  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:48.976074  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:48.976078  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:48.976082  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:48.976087  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:48.976106  292305 retry.go:31] will retry after 1.836787676s: missing components: kube-dns
	I1109 14:15:48.457511  285057 system_pods.go:86] 9 kube-system pods found
	I1109 14:15:48.457542  285057 system_pods.go:89] "calico-kube-controllers-5766bdd7c-9nbm6" [76e45476-6f51-4c06-b7a5-1cd089ccbb37] Running / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1109 14:15:48.457550  285057 system_pods.go:89] "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1109 14:15:48.457555  285057 system_pods.go:89] "coredns-66bc5c9577-ng52f" [f7a2da56-bce4-41d3-90aa-be666d9383dd] Running
	I1109 14:15:48.457559  285057 system_pods.go:89] "etcd-calico-593530" [e0ab9df6-3aed-4467-8ea2-64ca4a7a8c22] Running
	I1109 14:15:48.457562  285057 system_pods.go:89] "kube-apiserver-calico-593530" [4ca3f0e1-3968-4b87-9fcd-34b5ad73eef9] Running
	I1109 14:15:48.457567  285057 system_pods.go:89] "kube-controller-manager-calico-593530" [a4787f05-2c58-4d7b-9b22-ee3916ceedd1] Running
	I1109 14:15:48.457570  285057 system_pods.go:89] "kube-proxy-bvdm9" [91430046-f347-4991-9ed6-bc8ae4e0717b] Running
	I1109 14:15:48.457573  285057 system_pods.go:89] "kube-scheduler-calico-593530" [7bab48c1-67a2-4e79-8ae6-55fd54710901] Running
	I1109 14:15:48.457576  285057 system_pods.go:89] "storage-provisioner" [cfe72025-96b5-45d8-8419-20a6ff3a780d] Running
	I1109 14:15:48.457585  285057 system_pods.go:126] duration metric: took 17.677552743s to wait for k8s-apps to be running ...
	I1109 14:15:48.457594  285057 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:15:48.457631  285057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:48.470390  285057 system_svc.go:56] duration metric: took 12.787082ms WaitForService to wait for kubelet
	I1109 14:15:48.470423  285057 kubeadm.go:587] duration metric: took 22.728603366s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:15:48.470440  285057 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:15:48.473267  285057 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:15:48.473290  285057 node_conditions.go:123] node cpu capacity is 8
	I1109 14:15:48.473302  285057 node_conditions.go:105] duration metric: took 2.858053ms to run NodePressure ...
	I1109 14:15:48.473315  285057 start.go:242] waiting for startup goroutines ...
	I1109 14:15:48.473324  285057 start.go:247] waiting for cluster config update ...
	I1109 14:15:48.473344  285057 start.go:256] writing updated cluster config ...
	I1109 14:15:48.473612  285057 ssh_runner.go:195] Run: rm -f paused
	I1109 14:15:48.477201  285057 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:48.480230  285057 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ng52f" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.484134  285057 pod_ready.go:94] pod "coredns-66bc5c9577-ng52f" is "Ready"
	I1109 14:15:48.484158  285057 pod_ready.go:86] duration metric: took 3.908127ms for pod "coredns-66bc5c9577-ng52f" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.486134  285057 pod_ready.go:83] waiting for pod "etcd-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.489524  285057 pod_ready.go:94] pod "etcd-calico-593530" is "Ready"
	I1109 14:15:48.489546  285057 pod_ready.go:86] duration metric: took 3.392491ms for pod "etcd-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.493995  285057 pod_ready.go:83] waiting for pod "kube-apiserver-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.497494  285057 pod_ready.go:94] pod "kube-apiserver-calico-593530" is "Ready"
	I1109 14:15:48.497514  285057 pod_ready.go:86] duration metric: took 3.498625ms for pod "kube-apiserver-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.499393  285057 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:48.880835  285057 pod_ready.go:94] pod "kube-controller-manager-calico-593530" is "Ready"
	I1109 14:15:48.880866  285057 pod_ready.go:86] duration metric: took 381.451946ms for pod "kube-controller-manager-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:49.081749  285057 pod_ready.go:83] waiting for pod "kube-proxy-bvdm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:49.481377  285057 pod_ready.go:94] pod "kube-proxy-bvdm9" is "Ready"
	I1109 14:15:49.481404  285057 pod_ready.go:86] duration metric: took 399.632087ms for pod "kube-proxy-bvdm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:49.681423  285057 pod_ready.go:83] waiting for pod "kube-scheduler-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:50.081224  285057 pod_ready.go:94] pod "kube-scheduler-calico-593530" is "Ready"
	I1109 14:15:50.081246  285057 pod_ready.go:86] duration metric: took 399.800182ms for pod "kube-scheduler-calico-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:50.081256  285057 pod_ready.go:40] duration metric: took 1.604028627s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:50.123455  285057 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:15:50.125307  285057 out.go:179] * Done! kubectl is now configured to use "calico-593530" cluster and "default" namespace by default
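	(The retry.go lines in the calico-593530 stream above show minikube's poll-and-backoff wait for kube-dns: check the component list, and if something is missing, sleep a growing, jittered interval and try again. A minimal self-contained Go sketch of that shape follows; retryUntil and the backoff constants are assumptions, not minikube's actual retry.go.)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil repeatedly calls check until it returns nil or the deadline passes,
    // sleeping a randomized, growing interval between attempts, the same shape as the
    // "will retry after 658.546064ms: missing components: kube-dns" lines above.
    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        backoff := 500 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            // jitter keeps parallel waiters from polling in lockstep
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2
        }
    }

    func main() {
        attempts := 0
        err := retryUntil(10*time.Second, func() error {
            attempts++
            if attempts < 4 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
        fmt.Println("done:", err)
    }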
	W1109 14:15:47.790932  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:50.290878  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:50.816767  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:50.816806  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:50.816812  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:50.816818  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:50.816823  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:50.816827  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:50.816830  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:50.816833  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:50.816848  292305 retry.go:31] will retry after 2.233599429s: missing components: kube-dns
	I1109 14:15:53.054548  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:53.054579  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:53.054586  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:53.054592  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:53.054596  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:53.054599  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:53.054603  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:53.054606  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:53.054618  292305 retry.go:31] will retry after 2.802341321s: missing components: kube-dns
	W1109 14:15:52.790292  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	W1109 14:15:54.790546  280419 node_ready.go:57] node "kindnet-593530" has "Ready":"False" status (will retry)
	I1109 14:15:56.791262  280419 node_ready.go:49] node "kindnet-593530" is "Ready"
	I1109 14:15:56.791290  280419 node_ready.go:38] duration metric: took 41.003566488s for node "kindnet-593530" to be "Ready" ...
	I1109 14:15:56.791305  280419 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:15:56.791348  280419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:15:56.803143  280419 api_server.go:72] duration metric: took 41.49881417s to wait for apiserver process to appear ...
	I1109 14:15:56.803161  280419 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:15:56.803180  280419 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:15:56.807244  280419 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:15:56.808161  280419 api_server.go:141] control plane version: v1.34.1
	I1109 14:15:56.808186  280419 api_server.go:131] duration metric: took 5.018019ms to wait for apiserver health ...
	I1109 14:15:56.808196  280419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:15:56.810997  280419 system_pods.go:59] 8 kube-system pods found
	I1109 14:15:56.811028  280419 system_pods.go:61] "coredns-66bc5c9577-czn4q" [83857dcf-3d3e-4627-98d4-cb3890025b41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:56.811036  280419 system_pods.go:61] "etcd-kindnet-593530" [d7def6fa-d870-489f-8cde-c91579e96fbc] Running
	I1109 14:15:56.811042  280419 system_pods.go:61] "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
	I1109 14:15:56.811047  280419 system_pods.go:61] "kube-apiserver-kindnet-593530" [f351e2ed-6067-4770-a3bf-11f9b08be886] Running
	I1109 14:15:56.811052  280419 system_pods.go:61] "kube-controller-manager-kindnet-593530" [d9d04309-0e1b-43cf-a100-27dd3355a0d1] Running
	I1109 14:15:56.811057  280419 system_pods.go:61] "kube-proxy-2b82p" [ca422cca-ab6a-4ba6-ad64-d930c4d0bc23] Running
	I1109 14:15:56.811063  280419 system_pods.go:61] "kube-scheduler-kindnet-593530" [ac5aa5cc-418c-4036-84f1-a539578a9c6d] Running
	I1109 14:15:56.811070  280419 system_pods.go:61] "storage-provisioner" [4ea696ff-1c98-4a58-b242-3ee802da17bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:56.811081  280419 system_pods.go:74] duration metric: took 2.878198ms to wait for pod list to return data ...
	I1109 14:15:56.811095  280419 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:15:56.813215  280419 default_sa.go:45] found service account: "default"
	I1109 14:15:56.813235  280419 default_sa.go:55] duration metric: took 2.133116ms for default service account to be created ...
	I1109 14:15:56.813243  280419 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:15:56.817222  280419 system_pods.go:86] 8 kube-system pods found
	I1109 14:15:56.817351  280419 system_pods.go:89] "coredns-66bc5c9577-czn4q" [83857dcf-3d3e-4627-98d4-cb3890025b41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:56.817362  280419 system_pods.go:89] "etcd-kindnet-593530" [d7def6fa-d870-489f-8cde-c91579e96fbc] Running
	I1109 14:15:56.817374  280419 system_pods.go:89] "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
	I1109 14:15:56.817387  280419 system_pods.go:89] "kube-apiserver-kindnet-593530" [f351e2ed-6067-4770-a3bf-11f9b08be886] Running
	I1109 14:15:56.817424  280419 system_pods.go:89] "kube-controller-manager-kindnet-593530" [d9d04309-0e1b-43cf-a100-27dd3355a0d1] Running
	I1109 14:15:56.817472  280419 system_pods.go:89] "kube-proxy-2b82p" [ca422cca-ab6a-4ba6-ad64-d930c4d0bc23] Running
	I1109 14:15:56.817839  280419 system_pods.go:89] "kube-scheduler-kindnet-593530" [ac5aa5cc-418c-4036-84f1-a539578a9c6d] Running
	I1109 14:15:56.817855  280419 system_pods.go:89] "storage-provisioner" [4ea696ff-1c98-4a58-b242-3ee802da17bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:56.817876  280419 retry.go:31] will retry after 207.927622ms: missing components: kube-dns
	I1109 14:15:57.029609  280419 system_pods.go:86] 8 kube-system pods found
	I1109 14:15:57.029671  280419 system_pods.go:89] "coredns-66bc5c9577-czn4q" [83857dcf-3d3e-4627-98d4-cb3890025b41] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:57.029683  280419 system_pods.go:89] "etcd-kindnet-593530" [d7def6fa-d870-489f-8cde-c91579e96fbc] Running
	I1109 14:15:57.029691  280419 system_pods.go:89] "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
	I1109 14:15:57.029698  280419 system_pods.go:89] "kube-apiserver-kindnet-593530" [f351e2ed-6067-4770-a3bf-11f9b08be886] Running
	I1109 14:15:57.029704  280419 system_pods.go:89] "kube-controller-manager-kindnet-593530" [d9d04309-0e1b-43cf-a100-27dd3355a0d1] Running
	I1109 14:15:57.029711  280419 system_pods.go:89] "kube-proxy-2b82p" [ca422cca-ab6a-4ba6-ad64-d930c4d0bc23] Running
	I1109 14:15:57.029723  280419 system_pods.go:89] "kube-scheduler-kindnet-593530" [ac5aa5cc-418c-4036-84f1-a539578a9c6d] Running
	I1109 14:15:57.029730  280419 system_pods.go:89] "storage-provisioner" [4ea696ff-1c98-4a58-b242-3ee802da17bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:15:57.029751  280419 retry.go:31] will retry after 296.848925ms: missing components: kube-dns
	I1109 14:15:57.334256  280419 system_pods.go:86] 8 kube-system pods found
	I1109 14:15:57.334288  280419 system_pods.go:89] "coredns-66bc5c9577-czn4q" [83857dcf-3d3e-4627-98d4-cb3890025b41] Running
	I1109 14:15:57.334295  280419 system_pods.go:89] "etcd-kindnet-593530" [d7def6fa-d870-489f-8cde-c91579e96fbc] Running
	I1109 14:15:57.334301  280419 system_pods.go:89] "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
	I1109 14:15:57.334306  280419 system_pods.go:89] "kube-apiserver-kindnet-593530" [f351e2ed-6067-4770-a3bf-11f9b08be886] Running
	I1109 14:15:57.334311  280419 system_pods.go:89] "kube-controller-manager-kindnet-593530" [d9d04309-0e1b-43cf-a100-27dd3355a0d1] Running
	I1109 14:15:57.334317  280419 system_pods.go:89] "kube-proxy-2b82p" [ca422cca-ab6a-4ba6-ad64-d930c4d0bc23] Running
	I1109 14:15:57.334322  280419 system_pods.go:89] "kube-scheduler-kindnet-593530" [ac5aa5cc-418c-4036-84f1-a539578a9c6d] Running
	I1109 14:15:57.334330  280419 system_pods.go:89] "storage-provisioner" [4ea696ff-1c98-4a58-b242-3ee802da17bb] Running
	I1109 14:15:57.334340  280419 system_pods.go:126] duration metric: took 521.090338ms to wait for k8s-apps to be running ...
	I1109 14:15:57.334350  280419 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:15:57.334397  280419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:15:57.357449  280419 system_svc.go:56] duration metric: took 23.091164ms WaitForService to wait for kubelet
	I1109 14:15:57.357492  280419 kubeadm.go:587] duration metric: took 42.053154091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:15:57.357516  280419 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:15:57.361891  280419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1109 14:15:57.361917  280419 node_conditions.go:123] node cpu capacity is 8
	I1109 14:15:57.361929  280419 node_conditions.go:105] duration metric: took 4.407954ms to run NodePressure ...
	I1109 14:15:57.361943  280419 start.go:242] waiting for startup goroutines ...
	I1109 14:15:57.361953  280419 start.go:247] waiting for cluster config update ...
	I1109 14:15:57.361973  280419 start.go:256] writing updated cluster config ...
	I1109 14:15:57.362873  280419 ssh_runner.go:195] Run: rm -f paused
	I1109 14:15:57.368715  280419 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:57.374634  280419 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czn4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.381365  280419 pod_ready.go:94] pod "coredns-66bc5c9577-czn4q" is "Ready"
	I1109 14:15:57.381397  280419 pod_ready.go:86] duration metric: took 6.715957ms for pod "coredns-66bc5c9577-czn4q" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.384316  280419 pod_ready.go:83] waiting for pod "etcd-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.389676  280419 pod_ready.go:94] pod "etcd-kindnet-593530" is "Ready"
	I1109 14:15:57.389696  280419 pod_ready.go:86] duration metric: took 5.351325ms for pod "etcd-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.392377  280419 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.397107  280419 pod_ready.go:94] pod "kube-apiserver-kindnet-593530" is "Ready"
	I1109 14:15:57.397137  280419 pod_ready.go:86] duration metric: took 4.735759ms for pod "kube-apiserver-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.400542  280419 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.773686  280419 pod_ready.go:94] pod "kube-controller-manager-kindnet-593530" is "Ready"
	I1109 14:15:57.773711  280419 pod_ready.go:86] duration metric: took 373.147745ms for pod "kube-controller-manager-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:57.974561  280419 pod_ready.go:83] waiting for pod "kube-proxy-2b82p" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:58.374029  280419 pod_ready.go:94] pod "kube-proxy-2b82p" is "Ready"
	I1109 14:15:58.374055  280419 pod_ready.go:86] duration metric: took 399.468896ms for pod "kube-proxy-2b82p" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:58.575696  280419 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:58.973221  280419 pod_ready.go:94] pod "kube-scheduler-kindnet-593530" is "Ready"
	I1109 14:15:58.973244  280419 pod_ready.go:86] duration metric: took 397.514286ms for pod "kube-scheduler-kindnet-593530" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:15:58.973255  280419 pod_ready.go:40] duration metric: took 1.604507623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:15:59.014920  280419 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 14:15:59.016450  280419 out.go:179] * Done! kubectl is now configured to use "kindnet-593530" cluster and "default" namespace by default
	I1109 14:15:55.860931  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:55.860967  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:55.860974  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:55.860982  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:55.860988  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:55.860993  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:55.860999  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:55.861005  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:55.861026  292305 retry.go:31] will retry after 2.903100187s: missing components: kube-dns
	I1109 14:15:58.769758  292305 system_pods.go:86] 7 kube-system pods found
	I1109 14:15:58.769787  292305 system_pods.go:89] "coredns-66bc5c9577-p9v89" [fbb9f2c1-f770-4ffa-91f7-7a75b17d15a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:15:58.769794  292305 system_pods.go:89] "etcd-custom-flannel-593530" [1ef16500-20d7-457d-a4bd-87936ecffd96] Running
	I1109 14:15:58.769800  292305 system_pods.go:89] "kube-apiserver-custom-flannel-593530" [4011c21c-2073-43a3-b1fe-3a46c38774e5] Running
	I1109 14:15:58.769805  292305 system_pods.go:89] "kube-controller-manager-custom-flannel-593530" [7a15131d-96b7-4624-9c3b-ed1ea087d7fb] Running
	I1109 14:15:58.769808  292305 system_pods.go:89] "kube-proxy-jvcq2" [88c28436-a224-4e1f-8c56-2b8c78825de8] Running
	I1109 14:15:58.769813  292305 system_pods.go:89] "kube-scheduler-custom-flannel-593530" [a43e517f-4eb9-4329-8fe1-0359fe0e37de] Running
	I1109 14:15:58.769819  292305 system_pods.go:89] "storage-provisioner" [01481056-bb75-4e6c-987e-505de899216b] Running
	I1109 14:15:58.769832  292305 retry.go:31] will retry after 3.837368865s: missing components: kube-dns
	
	
	==> CRI-O <==
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.625773445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fecf764-b808-4a25-b719-242f2be036bc name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.626835032Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=39db12f4-0596-4c7b-ba79-7ef09cf8d014 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.62696559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.631761555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.631894367Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f3736bb7935b25570549f0c390a434cb8263e066f0534f046b20d61a0f1ee4f/merged/etc/passwd: no such file or directory"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.63192655Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f3736bb7935b25570549f0c390a434cb8263e066f0534f046b20d61a0f1ee4f/merged/etc/group: no such file or directory"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.632333896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.662655124Z" level=info msg="Created container 78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6: kube-system/storage-provisioner/storage-provisioner" id=39db12f4-0596-4c7b-ba79-7ef09cf8d014 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.663216012Z" level=info msg="Starting container: 78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6" id=c1e6a567-a7fe-4c74-bf24-d371186fb347 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:15:42 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:42.664954551Z" level=info msg="Started container" PID=1696 containerID=78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6 description=kube-system/storage-provisioner/storage-provisioner id=c1e6a567-a7fe-4c74-bf24-d371186fb347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c32f2df08eb386632b5c5c2b6c7c15e16709f4bf29049d4c9e2fbbbc6ad9051f
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.235570896Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.239710291Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.239734991Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.239750728Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.243250227Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.24327454Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.243292707Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.246829199Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.246853891Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.246872726Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.250251787Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.250274912Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.250296909Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.253540853Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:15:52 default-k8s-diff-port-326524 crio[561]: time="2025-11-09T14:15:52.253559463Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	78756eafc8cc6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   c32f2df08eb38       storage-provisioner                                    kube-system
	f64494efa5bee       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   0c42e1d026796       dashboard-metrics-scraper-6ffb444bf9-jzz6r             kubernetes-dashboard
	86d787fc7b9fc       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   269fc0ffbc5ee       kubernetes-dashboard-855c9754f9-cfzqd                  kubernetes-dashboard
	db196ab0b527e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   2a03da47f8c2c       coredns-66bc5c9577-z8lkx                               kube-system
	2aee0c1de134e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   b799dc9d1968e       busybox                                                default
	fc06f175e4a8d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   bf3ef13feda79       kube-proxy-n95wb                                       kube-system
	ebf68a39b2ef3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   08e623a4370fc       kindnet-fdxsl                                          kube-system
	4b5a253a8c077       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   c32f2df08eb38       storage-provisioner                                    kube-system
	7ab8f2cac821a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   82fe3eb826f2b       kube-scheduler-default-k8s-diff-port-326524            kube-system
	fbe03639cf3cf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   25f8340ee5b6d       kube-apiserver-default-k8s-diff-port-326524            kube-system
	5c183e798015e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   7605e01263310       etcd-default-k8s-diff-port-326524                      kube-system
	837343655ca08       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   fd96abbe6f29b       kube-controller-manager-default-k8s-diff-port-326524   kube-system
	
	
	==> coredns [db196ab0b527eabaa5ca6448d00c0929a6ddeb5c052739081cb73ceb539b821d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55685 - 43274 "HINFO IN 7174974885144464276.1359323117684649961. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.088692886s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-326524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-326524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=default-k8s-diff-port-326524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_13_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:13:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-326524
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:15:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:15:52 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:15:52 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:15:52 +0000   Sun, 09 Nov 2025 14:13:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:15:52 +0000   Sun, 09 Nov 2025 14:14:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-326524
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                d901abab-4a5c-4bab-8d2e-5eebe721a5ed
	  Boot ID:                    f61b57f8-a461-4dd6-b921-37a11bd88f0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-z8lkx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m16s
	  kube-system                 etcd-default-k8s-diff-port-326524                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m22s
	  kube-system                 kindnet-fdxsl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-326524             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-326524    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-n95wb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-default-k8s-diff-port-326524             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jzz6r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cfzqd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  Starting                 50s                    kube-proxy       
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m22s                  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m22s                  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m22s                  kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m18s                  node-controller  Node default-k8s-diff-port-326524 event: Registered Node default-k8s-diff-port-326524 in Controller
	  Normal  NodeReady                95s                    kubelet          Node default-k8s-diff-port-326524 status is now: NodeReady
	  Normal  Starting                 54s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)      kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)      kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)      kubelet          Node default-k8s-diff-port-326524 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-326524 event: Registered Node default-k8s-diff-port-326524 in Controller
	
	
	==> dmesg <==
	[  +6.571631] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 9 13:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.019189] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023897] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023889] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023875] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +1.023870] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +2.047759] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +4.031604] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[  +8.447123] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[ +16.382306] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 13:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 32 af 8c d0 bf ae b6 a5 14 28 bd aa 08 00
	[Nov 9 14:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 4f e9 4b 2a 15 08 06
	
	
	==> etcd [5c183e798015e8d24e58b5d9a3615af03e163ce2046d50f89cd8270f5eed3b9f] <==
	{"level":"warn","ts":"2025-11-09T14:15:10.642072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.649946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.657601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.664976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.672367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.679399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.686861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.693895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.706919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.714508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.722040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:15:10.783595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45682","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:15:14.039246Z","caller":"traceutil/trace.go:172","msg":"trace[606128908] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:609; }","duration":"112.114153ms","start":"2025-11-09T14:15:13.927109Z","end":"2025-11-09T14:15:14.039224Z","steps":["trace[606128908] 'read index received'  (duration: 112.107953ms)","trace[606128908] 'applied index is now lower than readState.Index'  (duration: 5.403µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:15:14.179864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.730761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller\" limit:1 ","response":"range_response_count:1 size:252"}
	{"level":"info","ts":"2025-11-09T14:15:14.179959Z","caller":"traceutil/trace.go:172","msg":"trace[834412207] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller; range_end:; response_count:1; response_revision:574; }","duration":"252.838202ms","start":"2025-11-09T14:15:13.927101Z","end":"2025-11-09T14:15:14.179939Z","steps":["trace[834412207] 'agreement among raft nodes before linearized reading'  (duration: 112.211975ms)","trace[834412207] 'range keys from in-memory index tree'  (duration: 140.427715ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:15:14.180410Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.619292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596946199590411 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-z8lkx.18765c3b487f24ee\" mod_revision:574 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-z8lkx.18765c3b487f24ee\" value_size:714 lease:499224909344814214 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-66bc5c9577-z8lkx.18765c3b487f24ee\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-09T14:15:14.180519Z","caller":"traceutil/trace.go:172","msg":"trace[2023832512] linearizableReadLoop","detail":"{readStateIndex:610; appliedIndex:609; }","duration":"131.351523ms","start":"2025-11-09T14:15:14.049153Z","end":"2025-11-09T14:15:14.180505Z","steps":["trace[2023832512] 'read index received'  (duration: 30.251µs)","trace[2023832512] 'applied index is now lower than readState.Index'  (duration: 131.319864ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:15:14.180671Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.506651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-z8lkx\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-09T14:15:14.180703Z","caller":"traceutil/trace.go:172","msg":"trace[1938066004] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-z8lkx; range_end:; response_count:1; response_revision:575; }","duration":"131.545699ms","start":"2025-11-09T14:15:14.049148Z","end":"2025-11-09T14:15:14.180694Z","steps":["trace[1938066004] 'agreement among raft nodes before linearized reading'  (duration: 131.402245ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:15:14.180876Z","caller":"traceutil/trace.go:172","msg":"trace[1216754960] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"265.206218ms","start":"2025-11-09T14:15:13.915657Z","end":"2025-11-09T14:15:14.180864Z","steps":["trace[1216754960] 'process raft request'  (duration: 123.591228ms)","trace[1216754960] 'compare'  (duration: 140.539821ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-09T14:15:15.152764Z","caller":"traceutil/trace.go:172","msg":"trace[1821183373] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:612; }","duration":"104.65581ms","start":"2025-11-09T14:15:15.048085Z","end":"2025-11-09T14:15:15.152741Z","steps":["trace[1821183373] 'read index received'  (duration: 104.635619ms)","trace[1821183373] 'applied index is now lower than readState.Index'  (duration: 19.405µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:15:15.152935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.829539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-z8lkx\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-09T14:15:15.152935Z","caller":"traceutil/trace.go:172","msg":"trace[1904282794] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"120.456086ms","start":"2025-11-09T14:15:15.032457Z","end":"2025-11-09T14:15:15.152913Z","steps":["trace[1904282794] 'process raft request'  (duration: 120.353297ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:15:15.153333Z","caller":"traceutil/trace.go:172","msg":"trace[71154788] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-z8lkx; range_end:; response_count:1; response_revision:577; }","duration":"104.875305ms","start":"2025-11-09T14:15:15.048077Z","end":"2025-11-09T14:15:15.152953Z","steps":["trace[71154788] 'agreement among raft nodes before linearized reading'  (duration: 104.751804ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:15:15.162866Z","caller":"traceutil/trace.go:172","msg":"trace[900107361] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"130.318311ms","start":"2025-11-09T14:15:15.032527Z","end":"2025-11-09T14:15:15.162845Z","steps":["trace[900107361] 'process raft request'  (duration: 125.669038ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:16:02 up 58 min,  0 user,  load average: 5.66, 4.43, 2.63
	Linux default-k8s-diff-port-326524 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ebf68a39b2ef31de8b38938ff0fda338ca0858e9fd7cc54035465ac606412dc9] <==
	I1109 14:15:12.028890       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:15:12.029142       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:15:12.029298       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:15:12.029316       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:15:12.029338       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:15:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:15:12.229243       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:15:12.229271       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:15:12.229283       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:15:12.229548       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:15:42.229965       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:15:42.229971       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:15:42.229965       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:15:42.322582       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1109 14:15:43.529425       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:15:43.529450       1 metrics.go:72] Registering metrics
	I1109 14:15:43.529511       1 controller.go:711] "Syncing nftables rules"
	I1109 14:15:52.235269       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:15:52.235318       1 main.go:301] handling current node
	I1109 14:16:02.236750       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:16:02.236780       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fbe03639cf3cf12216d6d619a7ac216d6482e7d7722f4c3ff0c7021ea98e7f30] <==
	I1109 14:15:11.355833       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:15:11.349554       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:15:11.352743       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:15:11.368090       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:15:11.374113       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:15:11.374202       1 policy_source.go:240] refreshing policies
	I1109 14:15:11.376887       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1109 14:15:11.377058       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:15:11.377070       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:15:11.377078       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:15:11.377083       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:15:11.382622       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:15:11.388465       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:15:11.390554       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:15:11.476011       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:15:11.733195       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:15:11.762514       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:15:11.789729       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:15:11.803886       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:15:11.883088       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.49.13"}
	I1109 14:15:11.899826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.151.61"}
	I1109 14:15:12.256104       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:15:15.031969       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:15:15.165820       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:15:15.238072       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [837343655ca08ffad86a95fba8e051cfacdce4058b4c6aebfef423a9a95ad170] <==
	I1109 14:15:14.646038       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:15:14.678893       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:15:14.678930       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:15:14.678942       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:15:14.678974       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:15:14.679193       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:15:14.679319       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 14:15:14.679331       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:15:14.679366       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:15:14.679430       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:15:14.679514       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-326524"
	I1109 14:15:14.679550       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 14:15:14.679667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:15:14.681998       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:15:14.685218       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:15:14.685279       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:15:14.687672       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:15:14.695924       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:15:14.695945       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:15:14.705241       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:15:14.707548       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:15:14.708615       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:15:14.711932       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:15:14.713054       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:15:14.715283       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [fc06f175e4a8df21959410c9b874ceb5942160e55f3c77acdd8326cb0be2a478] <==
	I1109 14:15:11.932621       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:15:12.001861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:15:12.102412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:15:12.102440       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:15:12.102520       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:15:12.122012       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:15:12.122074       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:15:12.127784       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:15:12.128148       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:15:12.128183       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:15:12.134115       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:15:12.134138       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:15:12.134170       1 config.go:200] "Starting service config controller"
	I1109 14:15:12.134175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:15:12.134192       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:15:12.134202       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:15:12.134328       1 config.go:309] "Starting node config controller"
	I1109 14:15:12.134366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:15:12.134375       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:15:12.234318       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:15:12.234332       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:15:12.234313       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7ab8f2cac821afdbf2b15cac151b0b2e02e8e8d57071e97a867dc8ec28d4c7f2] <==
	I1109 14:15:09.886025       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:15:11.343778       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:15:11.343808       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:15:11.352040       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:15:11.352189       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:15:11.352725       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:15:11.352157       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1109 14:15:11.352838       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1109 14:15:11.352205       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:15:11.354573       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:15:11.352224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:15:11.452992       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1109 14:15:11.453136       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:15:11.455248       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:15:15 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:15.399107     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/abdce049-274b-4d8e-b0bb-1db69a7fd265-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cfzqd\" (UID: \"abdce049-274b-4d8e-b0bb-1db69a7fd265\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfzqd"
	Nov 09 14:15:15 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:15.399346     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vgw\" (UniqueName: \"kubernetes.io/projected/20ebc0a6-2eb1-4988-b1ab-367cac579079-kube-api-access-n2vgw\") pod \"dashboard-metrics-scraper-6ffb444bf9-jzz6r\" (UID: \"20ebc0a6-2eb1-4988-b1ab-367cac579079\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r"
	Nov 09 14:15:15 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:15.399619     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/20ebc0a6-2eb1-4988-b1ab-367cac579079-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-jzz6r\" (UID: \"20ebc0a6-2eb1-4988-b1ab-367cac579079\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r"
	Nov 09 14:15:18 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:18.553832     719 scope.go:117] "RemoveContainer" containerID="454cf4ccce17381fca3f7fb640a151ba3cc8a6ca75233f3ad2c9f60b447a34e9"
	Nov 09 14:15:19 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:19.559155     719 scope.go:117] "RemoveContainer" containerID="454cf4ccce17381fca3f7fb640a151ba3cc8a6ca75233f3ad2c9f60b447a34e9"
	Nov 09 14:15:19 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:19.559442     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:19 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:19.559590     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:20 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:20.563411     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:20 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:20.563626     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:22 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:22.096476     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:22 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:22.096690     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:23 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:23.581739     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cfzqd" podStartSLOduration=1.332306976 podStartE2EDuration="8.581716639s" podCreationTimestamp="2025-11-09 14:15:15 +0000 UTC" firstStartedPulling="2025-11-09 14:15:15.689194853 +0000 UTC m=+7.307046992" lastFinishedPulling="2025-11-09 14:15:22.938604505 +0000 UTC m=+14.556456655" observedRunningTime="2025-11-09 14:15:23.581593208 +0000 UTC m=+15.199445400" watchObservedRunningTime="2025-11-09 14:15:23.581716639 +0000 UTC m=+15.199568796"
	Nov 09 14:15:37 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:37.484504     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:37 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:37.608228     719 scope.go:117] "RemoveContainer" containerID="fc9ed13ab7b43b93301b2d4c22c2d13822d9946bcc344d69be7deb082e9aabfc"
	Nov 09 14:15:37 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:37.608453     719 scope.go:117] "RemoveContainer" containerID="f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	Nov 09 14:15:37 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:37.608658     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:42 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:42.097462     719 scope.go:117] "RemoveContainer" containerID="f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	Nov 09 14:15:42 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:42.097700     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:42 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:42.624444     719 scope.go:117] "RemoveContainer" containerID="4b5a253a8c0774e804e83d03fcc6cdd58c8b3baf291ecfb31bc3eb73c12fa77d"
	Nov 09 14:15:55 default-k8s-diff-port-326524 kubelet[719]: I1109 14:15:55.484830     719 scope.go:117] "RemoveContainer" containerID="f64494efa5bee134a9a5d0bbcdb057a51a842044c2230e1f2ea2ef9aa0b9654f"
	Nov 09 14:15:55 default-k8s-diff-port-326524 kubelet[719]: E1109 14:15:55.485074     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jzz6r_kubernetes-dashboard(20ebc0a6-2eb1-4988-b1ab-367cac579079)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jzz6r" podUID="20ebc0a6-2eb1-4988-b1ab-367cac579079"
	Nov 09 14:15:57 default-k8s-diff-port-326524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:15:57 default-k8s-diff-port-326524 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:15:57 default-k8s-diff-port-326524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 09 14:15:57 default-k8s-diff-port-326524 systemd[1]: kubelet.service: Consumed 1.499s CPU time.
	
	
	==> kubernetes-dashboard [86d787fc7b9fc4076e72a30dca4ee7586b81d535a1d2635a796c6746370cdcd2] <==
	2025/11/09 14:15:22 Starting overwatch
	2025/11/09 14:15:22 Using namespace: kubernetes-dashboard
	2025/11/09 14:15:22 Using in-cluster config to connect to apiserver
	2025/11/09 14:15:22 Using secret token for csrf signing
	2025/11/09 14:15:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:15:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:15:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:15:22 Generating JWE encryption key
	2025/11/09 14:15:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:15:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:15:23 Initializing JWE encryption key from synchronized object
	2025/11/09 14:15:23 Creating in-cluster Sidecar client
	2025/11/09 14:15:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:15:23 Serving insecurely on HTTP port: 9090
	2025/11/09 14:15:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4b5a253a8c0774e804e83d03fcc6cdd58c8b3baf291ecfb31bc3eb73c12fa77d] <==
	I1109 14:15:11.898738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:15:41.902109       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [78756eafc8cc650dd8346a21fa319a4f4fd39031ed235b5ff8da5979c38a8ba6] <==
	I1109 14:15:42.676256       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:15:42.683226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:15:42.683261       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:15:42.684921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:15:46.139973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:15:50.400735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:15:53.999239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:15:57.053556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:16:00.075809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:16:00.082031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:16:00.082204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:16:00.082300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9456f7ff-bf23-4b3e-a78e-e1e46b0b9684", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-326524_daf419db-493f-4dac-a62a-bada0890b589 became leader
	I1109 14:16:00.082364       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-326524_daf419db-493f-4dac-a62a-bada0890b589!
	W1109 14:16:00.084682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:16:00.087854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:16:00.182581       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-326524_daf419db-493f-4dac-a62a-bada0890b589!
	W1109 14:16:02.091706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:16:02.095614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524: exit status 2 (334.849367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-326524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.06s)
E1109 14:17:02.666171    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:02.672538    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:02.683866    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:02.705171    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:02.746490    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:02.829309    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:02.990595    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:03.312538    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:03.954123    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:05.235482    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:17:07.797258    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.09
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.86
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.79
22 TestOffline 57.88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 122.51
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 7.4
48 TestAddons/StoppedEnableDisable 16.65
49 TestCertOptions 23.66
50 TestCertExpiration 211.42
52 TestForceSystemdFlag 27.22
53 TestForceSystemdEnv 40.27
58 TestErrorSpam/setup 20.09
59 TestErrorSpam/start 0.62
60 TestErrorSpam/status 0.9
61 TestErrorSpam/pause 6.61
62 TestErrorSpam/unpause 5.21
63 TestErrorSpam/stop 2.55
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.4
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.78
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.76
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.44
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 47.67
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.1
86 TestFunctional/serial/LogsFileCmd 1.13
87 TestFunctional/serial/InvalidService 3.86
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 5.96
91 TestFunctional/parallel/DryRun 0.37
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.01
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 21.11
101 TestFunctional/parallel/SSHCmd 0.77
102 TestFunctional/parallel/CpCmd 1.98
103 TestFunctional/parallel/MySQL 15.94
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 1.92
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.45
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.48
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.3
121 TestFunctional/parallel/ImageCommands/Setup 1.01
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.25
134 TestFunctional/parallel/ImageCommands/ImageRemove 1.23
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
145 TestFunctional/parallel/ProfileCmd/profile_list 0.39
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
147 TestFunctional/parallel/MountCmd/any-port 5.48
148 TestFunctional/parallel/MountCmd/specific-port 1.79
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
150 TestFunctional/parallel/ServiceCmd/List 1.68
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 113.27
163 TestMultiControlPlane/serial/DeployApp 3.36
164 TestMultiControlPlane/serial/PingHostFromPods 0.98
165 TestMultiControlPlane/serial/AddWorkerNode 54.37
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
168 TestMultiControlPlane/serial/CopyFile 16.23
169 TestMultiControlPlane/serial/StopSecondaryNode 19.21
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.5
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 108.48
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.44
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
176 TestMultiControlPlane/serial/StopCluster 43.5
177 TestMultiControlPlane/serial/RestartCluster 55.57
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
179 TestMultiControlPlane/serial/AddSecondaryNode 39.17
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
185 TestJSONOutput/start/Command 67.82
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.97
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 28.24
211 TestKicCustomNetwork/use_default_bridge_network 22.68
212 TestKicExistingNetwork 24.2
213 TestKicCustomSubnet 24.58
214 TestKicStaticIP 24.55
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 44.98
219 TestMountStart/serial/StartWithMountFirst 4.79
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 4.89
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.68
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.11
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 91.63
231 TestMultiNode/serial/DeployApp2Nodes 3.05
232 TestMultiNode/serial/PingHostFrom2Pods 0.68
233 TestMultiNode/serial/AddNode 53.2
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.62
236 TestMultiNode/serial/CopyFile 9.24
237 TestMultiNode/serial/StopNode 2.17
238 TestMultiNode/serial/StartAfterStop 7.06
239 TestMultiNode/serial/RestartKeepsNodes 57.29
240 TestMultiNode/serial/DeleteNode 4.9
241 TestMultiNode/serial/StopMultiNode 19.37
242 TestMultiNode/serial/RestartMultiNode 41.29
243 TestMultiNode/serial/ValidateNameConflict 23.49
248 TestPreload 82.27
250 TestScheduledStopUnix 95.86
253 TestInsufficientStorage 9.51
254 TestRunningBinaryUpgrade 51.09
256 TestKubernetesUpgrade 311.76
257 TestMissingContainerUpgrade 59.81
265 TestStoppedBinaryUpgrade/Setup 0.45
266 TestStoppedBinaryUpgrade/Upgrade 58.8
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
270 TestNoKubernetes/serial/StartWithK8s 23.14
271 TestNoKubernetes/serial/StartWithStopK8s 7.56
272 TestNoKubernetes/serial/Start 4.13
273 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
275 TestNoKubernetes/serial/ProfileList 16.44
276 TestNoKubernetes/serial/Stop 1.28
277 TestNoKubernetes/serial/StartNoArgs 6.97
279 TestPause/serial/Start 37.32
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
288 TestNetworkPlugins/group/false 3.37
292 TestPause/serial/SecondStartNoReconfiguration 8.45
294 TestStartStop/group/old-k8s-version/serial/FirstStart 48.15
297 TestStartStop/group/no-preload/serial/FirstStart 49.53
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.23
300 TestStartStop/group/old-k8s-version/serial/Stop 15.98
301 TestStartStop/group/no-preload/serial/DeployApp 8.22
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
304 TestStartStop/group/old-k8s-version/serial/SecondStart 52.21
305 TestStartStop/group/no-preload/serial/Stop 18.8
307 TestStartStop/group/embed-certs/serial/FirstStart 43.56
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/no-preload/serial/SecondStart 44.7
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.29
312 TestStartStop/group/embed-certs/serial/DeployApp 8.24
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
316 TestStartStop/group/embed-certs/serial/Stop 18.58
317 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
322 TestStartStop/group/newest-cni/serial/FirstStart 30.95
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
326 TestStartStop/group/embed-certs/serial/SecondStart 47.17
327 TestNetworkPlugins/group/auto/Start 45.04
328 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/Stop 12.55
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/newest-cni/serial/SecondStart 10.88
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.24
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
340 TestNetworkPlugins/group/auto/KubeletFlags 0.32
341 TestNetworkPlugins/group/auto/NetCatPod 9.19
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.07
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 20.39
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
346 TestNetworkPlugins/group/kindnet/Start 71.74
347 TestNetworkPlugins/group/auto/DNS 0.14
348 TestNetworkPlugins/group/auto/Localhost 0.12
349 TestNetworkPlugins/group/auto/HairPin 0.1
350 TestNetworkPlugins/group/calico/Start 53.49
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.19
353 TestNetworkPlugins/group/custom-flannel/Start 55.12
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
355 TestNetworkPlugins/group/calico/ControllerPod 6
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
357 TestNetworkPlugins/group/calico/KubeletFlags 0.28
358 TestNetworkPlugins/group/calico/NetCatPod 9.17
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.17
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.53
365 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
366 TestNetworkPlugins/group/calico/DNS 0.16
367 TestNetworkPlugins/group/calico/Localhost 0.14
368 TestNetworkPlugins/group/calico/HairPin 0.13
369 TestNetworkPlugins/group/enable-default-cni/Start 63.55
370 TestNetworkPlugins/group/custom-flannel/DNS 0.12
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
373 TestNetworkPlugins/group/kindnet/DNS 0.12
374 TestNetworkPlugins/group/kindnet/Localhost 0.11
375 TestNetworkPlugins/group/kindnet/HairPin 0.1
376 TestNetworkPlugins/group/flannel/Start 50.2
377 TestNetworkPlugins/group/bridge/Start 65.61
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.18
380 TestNetworkPlugins/group/flannel/ControllerPod 6
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.08
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.08
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
385 TestNetworkPlugins/group/flannel/NetCatPod 8.16
386 TestNetworkPlugins/group/flannel/DNS 0.11
387 TestNetworkPlugins/group/flannel/Localhost 0.09
388 TestNetworkPlugins/group/flannel/HairPin 0.1
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
390 TestNetworkPlugins/group/bridge/NetCatPod 9.17
391 TestNetworkPlugins/group/bridge/DNS 0.1
392 TestNetworkPlugins/group/bridge/Localhost 0.08
393 TestNetworkPlugins/group/bridge/HairPin 0.08
TestDownloadOnly/v1.28.0/json-events (4.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-517015 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-517015 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.087867423s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1109 13:28:45.385843    9365 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1109 13:28:45.385920    9365 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-517015
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-517015: exit status 85 (66.916367ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-517015 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-517015 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:28:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:28:41.345304    9377 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:28:41.345513    9377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:41.345521    9377 out.go:374] Setting ErrFile to fd 2...
	I1109 13:28:41.345525    9377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:41.345702    9377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	W1109 13:28:41.345796    9377 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21139-5854/.minikube/config/config.json: open /home/jenkins/minikube-integration/21139-5854/.minikube/config/config.json: no such file or directory
	I1109 13:28:41.346214    9377 out.go:368] Setting JSON to true
	I1109 13:28:41.347075    9377 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":671,"bootTime":1762694250,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:28:41.347151    9377 start.go:143] virtualization: kvm guest
	I1109 13:28:41.349226    9377 out.go:99] [download-only-517015] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:28:41.349325    9377 notify.go:221] Checking for updates...
	W1109 13:28:41.349388    9377 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball: no such file or directory
	I1109 13:28:41.350574    9377 out.go:171] MINIKUBE_LOCATION=21139
	I1109 13:28:41.351907    9377 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:28:41.353235    9377 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:28:41.354601    9377 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 13:28:41.355838    9377 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1109 13:28:41.358018    9377 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 13:28:41.358218    9377 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:28:41.381113    9377 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 13:28:41.381224    9377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:28:41.771864    9377 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-09 13:28:41.761198632 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:28:41.771962    9377 docker.go:319] overlay module found
	I1109 13:28:41.773614    9377 out.go:99] Using the docker driver based on user configuration
	I1109 13:28:41.773675    9377 start.go:309] selected driver: docker
	I1109 13:28:41.773687    9377 start.go:930] validating driver "docker" against <nil>
	I1109 13:28:41.773770    9377 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:28:41.827871    9377 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-09 13:28:41.818772474 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:28:41.828062    9377 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:28:41.828535    9377 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1109 13:28:41.828729    9377 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 13:28:41.830267    9377 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-517015 host does not exist
	  To start a cluster, run: "minikube start -p download-only-517015"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-517015
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-263673 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-263673 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.856459286s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.86s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1109 13:28:49.656128    9365 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1109 13:28:49.656162    9365 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-263673
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-263673: exit status 85 (67.063469ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-517015 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-517015 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ delete  │ -p download-only-517015                                                                                                                                                   │ download-only-517015 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ start   │ -o=json --download-only -p download-only-263673 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-263673 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:28:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:28:45.847684    9731 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:28:45.847917    9731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:45.847925    9731 out.go:374] Setting ErrFile to fd 2...
	I1109 13:28:45.847929    9731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:45.848102    9731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:28:45.848487    9731 out.go:368] Setting JSON to true
	I1109 13:28:45.849281    9731 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":676,"bootTime":1762694250,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:28:45.849359    9731 start.go:143] virtualization: kvm guest
	I1109 13:28:45.850851    9731 out.go:99] [download-only-263673] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:28:45.851016    9731 notify.go:221] Checking for updates...
	I1109 13:28:45.852038    9731 out.go:171] MINIKUBE_LOCATION=21139
	I1109 13:28:45.853188    9731 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:28:45.854195    9731 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:28:45.855292    9731 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 13:28:45.856284    9731 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1109 13:28:45.858129    9731 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 13:28:45.858346    9731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:28:45.881225    9731 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 13:28:45.881277    9731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:28:45.938710    9731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-09 13:28:45.928941538 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:28:45.938810    9731 docker.go:319] overlay module found
	I1109 13:28:45.940107    9731 out.go:99] Using the docker driver based on user configuration
	I1109 13:28:45.940150    9731 start.go:309] selected driver: docker
	I1109 13:28:45.940159    9731 start.go:930] validating driver "docker" against <nil>
	I1109 13:28:45.940243    9731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:28:45.991872    9731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-09 13:28:45.982391293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:28:45.992060    9731 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:28:45.992509    9731 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1109 13:28:45.992666    9731 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 13:28:45.994162    9731 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-263673 host does not exist
	  To start a cluster, run: "minikube start -p download-only-263673"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-263673
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (0.38s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-824434 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-824434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-824434
--- PASS: TestDownloadOnlyKic (0.38s)

                                                
                                    
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I1109 13:28:50.709112    9365 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-557048 --alsologtostderr --binary-mirror http://127.0.0.1:42397 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-557048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-557048
--- PASS: TestBinaryMirror (0.79s)

                                                
                                    
TestOffline (57.88s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-721597 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-721597 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (55.363599903s)
helpers_test.go:175: Cleaning up "offline-crio-721597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-721597
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-721597: (2.519299081s)
--- PASS: TestOffline (57.88s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-762402
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-762402: exit status 85 (56.908112ms)

                                                
                                                
-- stdout --
	* Profile "addons-762402" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-762402"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-762402
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-762402: exit status 85 (57.320834ms)

                                                
                                                
-- stdout --
	* Profile "addons-762402" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-762402"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (122.51s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-762402 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-762402 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m2.513714976s)
--- PASS: TestAddons/Setup (122.51s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-762402 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-762402 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.4s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-762402 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-762402 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [69f0b611-7084-4d17-814a-0ed1e841dc08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [69f0b611-7084-4d17-814a-0ed1e841dc08] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.002984272s
addons_test.go:694: (dbg) Run:  kubectl --context addons-762402 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-762402 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-762402 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.40s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.65s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-762402
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-762402: (16.372150665s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-762402
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-762402
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-762402
--- PASS: TestAddons/StoppedEnableDisable (16.65s)

                                                
                                    
TestCertOptions (23.66s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-350702 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1109 14:10:54.598789    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-350702 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.592987606s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-350702 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-350702 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-350702 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-350702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-350702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-350702: (2.442396372s)
--- PASS: TestCertOptions (23.66s)
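
Editor's note: TestCertOptions starts a cluster with extra --apiserver-ips/--apiserver-names entries and a custom --apiserver-port, then reads the generated apiserver certificate back over ssh with openssl. Below is a minimal Go sketch of the same SAN check, assuming the cert-options-350702 profile were still present (the test deletes it during cleanup) and the out/minikube-linux-amd64 binary path used throughout this report; both are assumptions, not fixed values.

// certsan_check.go: verify that the SANs requested via --apiserver-ips /
// --apiserver-names ended up in the apiserver certificate. Profile name,
// binary path, and expected SAN values are assumptions taken from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-350702",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("reading apiserver.crt failed:", err)
		return
	}
	text := string(out)
	// These values come from the --apiserver-ips / --apiserver-names flags above.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if strings.Contains(text, want) {
			fmt.Println("found SAN:", want)
		} else {
			fmt.Println("missing SAN:", want)
		}
	}
}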

                                                
                                    
TestCertExpiration (211.42s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-883873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-883873 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.404972079s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-883873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-883873 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.632488247s)
helpers_test.go:175: Cleaning up "cert-expiration-883873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-883873
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-883873: (2.380506037s)
--- PASS: TestCertExpiration (211.42s)

                                                
                                    
TestForceSystemdFlag (27.22s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-559374 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-559374 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.473844191s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-559374 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-559374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-559374
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-559374: (2.452022449s)
--- PASS: TestForceSystemdFlag (27.22s)

                                                
                                    
TestForceSystemdEnv (40.27s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-768316 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-768316 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.600533062s)
helpers_test.go:175: Cleaning up "force-systemd-env-768316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-768316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-768316: (2.66837107s)
--- PASS: TestForceSystemdEnv (40.27s)

                                                
                                    
TestErrorSpam/setup (20.09s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-206931 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-206931 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-206931 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-206931 --driver=docker  --container-runtime=crio: (20.087817561s)
--- PASS: TestErrorSpam/setup (20.09s)

                                                
                                    
TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
TestErrorSpam/status (0.9s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 status
--- PASS: TestErrorSpam/status (0.90s)

                                                
                                    
TestErrorSpam/pause (6.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause: exit status 80 (2.362040607s)

                                                
                                                
-- stdout --
	* Pausing node nospam-206931 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:34:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause: exit status 80 (2.247245418s)

                                                
                                                
-- stdout --
	* Pausing node nospam-206931 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:34:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause: exit status 80 (1.999523851s)

                                                
                                                
-- stdout --
	* Pausing node nospam-206931 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:34:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.61s)
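
Editor's note: the three pause failures above all have the same shape: minikube pause lists running containers by invoking "sudo runc list -f json" inside the node, and because /run/runc does not exist that command exits non-zero, so the CLI bails out with GUEST_PAUSE (exit status 80). Here is a minimal Go sketch of re-running that same check from the host; the nospam-206931 profile name and the out/minikube-linux-amd64 binary path are assumptions taken from this log, not a fixed minikube API.

// runc_check.go: re-run the container listing that minikube pause relies on.
// Profile name and binary path are assumptions taken from the log above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command that surfaces in the GUEST_PAUSE error: sudo runc list -f json.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "nospam-206931",
		"ssh", "--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero status here is what the pause path reports as exit status 80.
		fmt.Printf("runc list failed with status %d\n", ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not invoke minikube ssh:", err)
		return
	}
	fmt.Println("runc list succeeded; pause should not hit GUEST_PAUSE")
}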

                                                
                                    
TestErrorSpam/unpause (5.21s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause: exit status 80 (1.78400907s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-206931 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:34:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause: exit status 80 (1.92989261s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-206931 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:34:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause: exit status 80 (1.496341902s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-206931 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:34:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.21s)

                                                
                                    
TestErrorSpam/stop (2.55s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 stop: (2.359158956s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-206931 --log_dir /tmp/nospam-206931 stop
--- PASS: TestErrorSpam/stop (2.55s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21139-5854/.minikube/files/etc/test/nested/copy/9365/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.4s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-630518 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-630518 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.396493271s)
--- PASS: TestFunctional/serial/StartWithProxy (37.40s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.78s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1109 13:35:18.786332    9365 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-630518 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-630518 --alsologtostderr -v=8: (5.781851429s)
functional_test.go:678: soft start took 5.78303063s for "functional-630518" cluster.
I1109 13:35:24.568670    9365 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.78s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-630518 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-630518 /tmp/TestFunctionalserialCacheCmdcacheadd_local4097694987/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cache add minikube-local-cache-test:functional-630518
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cache delete minikube-local-cache-test:functional-630518
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-630518
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (267.275362ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.44s)
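
Editor's note: the cache_reload steps above are a small round-trip: remove the cached image from the node with crictl, confirm that inspecti now fails, run cache reload, and confirm that inspecti succeeds again. Below is a sketch of scripting that sequence with Go's os/exec, assuming the functional-630518 profile, the binary path used in this report, and that registry.k8s.io/pause:latest is already in minikube's local cache (it was added in add_remote above); all of these are assumptions about this specific environment.

// cache_reload_roundtrip.go: replay the cache_reload sequence from the log.
// Profile name, binary path, and the cached image are assumptions taken from
// this report.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary with the given arguments and echoes its output.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ minikube %v\n%s\n", args, out)
	return err
}

func main() {
	const profile = "functional-630518"
	const img = "registry.k8s.io/pause:latest"

	// 1. Remove the image from the node's container runtime.
	_ = run("-p", profile, "ssh", "sudo", "crictl", "rmi", img)

	// 2. inspecti should now fail (exit status 1 in the log above).
	if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("unexpected: image still present after rmi")
	}

	// 3. Load everything in minikube's local cache back into the node.
	if err := run("-p", profile, "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}

	// 4. inspecti should succeed once the cached image has been restored.
	if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload round-trip succeeded")
}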

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 kubectl -- --context functional-630518 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-630518 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (47.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-630518 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1109 13:35:54.597612    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:54.607999    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:54.619389    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:54.640752    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:54.682078    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:54.763424    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:54.924870    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:55.246507    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:55.888507    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:57.170059    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:35:59.732748    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:36:04.854799    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:36:15.096500    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-630518 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.668787144s)
functional_test.go:776: restart took 47.668972579s for "functional-630518" cluster.
I1109 13:36:18.354314    9365 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (47.67s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-630518 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 logs: (1.103450622s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 logs --file /tmp/TestFunctionalserialLogsFileCmd3609005315/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 logs --file /tmp/TestFunctionalserialLogsFileCmd3609005315/001/logs.txt: (1.128597419s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.13s)

                                                
                                    
TestFunctional/serial/InvalidService (3.86s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-630518 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-630518
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-630518: exit status 115 (316.691395ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31130 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-630518 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.86s)
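
Editor's note: InvalidService exercises minikube service against a Service whose selector matches no running pod: the URL table is still printed, but the command exits with status 115 and SVC_UNREACHABLE. The sketch below detects that case programmatically; the invalid-svc name, functional-630518 profile, and binary path are assumptions carried over from the log, and it presumes testdata/invalidsvc.yaml is still applied.

// invalid_service.go: call minikube service and detect the SVC_UNREACHABLE
// case (exit status 115 in the InvalidService log above).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc",
		"-p", "functional-630518")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("service is reachable")
	case errors.As(err, &ee) && ee.ExitCode() == 115:
		// No running pod backs the service; matches SVC_UNREACHABLE above.
		fmt.Println("service exists but has no running endpoints")
	default:
		fmt.Println("minikube service failed:", err)
	}
}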

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 config get cpus: exit status 14 (68.488603ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 config get cpus: exit status 14 (59.455738ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
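
Editor's note: the ConfigCmd run documents the exit-code contract of minikube config get: status 14 plus "Error: specified key could not be found in config" when the key is unset, and status 0 with the value on stdout once config set has stored it. A short Go sketch that treats the two cases separately, assuming the same functional-630518 profile and binary path as above:

// config_get.go: read a minikube config key and treat "not set" (exit 14,
// as seen in the ConfigCmd log above) separately from other failures.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func getConfig(profile, key string) (string, bool, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "config", "get", key)
	out, err := cmd.Output()
	if err == nil {
		return strings.TrimSpace(string(out)), true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		// Exit status 14: the key is simply not present in the config.
		return "", false, nil
	}
	return "", false, err
}

func main() {
	val, ok, err := getConfig("functional-630518", "cpus")
	switch {
	case err != nil:
		fmt.Println("config get failed:", err)
	case !ok:
		fmt.Println("cpus is not set")
	default:
		fmt.Println("cpus =", val)
	}
}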

                                                
                                    
TestFunctional/parallel/DashboardCmd (5.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-630518 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-630518 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 48747: os: process already finished
E1109 13:37:16.539980    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:38:38.462294    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:40:54.597313    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:41:22.304130    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:45:54.597101    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (5.96s)

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-630518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-630518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (150.27806ms)

                                                
                                                
-- stdout --
	* [functional-630518] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:36:49.991069   47910 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:36:49.991274   47910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:36:49.991282   47910 out.go:374] Setting ErrFile to fd 2...
	I1109 13:36:49.991286   47910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:36:49.991450   47910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:36:49.991847   47910 out.go:368] Setting JSON to false
	I1109 13:36:49.992747   47910 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1160,"bootTime":1762694250,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:36:49.992821   47910 start.go:143] virtualization: kvm guest
	I1109 13:36:49.995058   47910 out.go:179] * [functional-630518] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:36:49.996080   47910 notify.go:221] Checking for updates...
	I1109 13:36:49.996095   47910 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:36:49.997587   47910 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:36:49.998799   47910 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:36:49.999925   47910 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 13:36:50.000911   47910 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:36:50.001830   47910 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:36:50.003219   47910 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:36:50.003678   47910 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:36:50.026124   47910 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 13:36:50.026193   47910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:36:50.080368   47910 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-09 13:36:50.070517742 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:36:50.080472   47910 docker.go:319] overlay module found
	I1109 13:36:50.081886   47910 out.go:179] * Using the docker driver based on existing profile
	I1109 13:36:50.082759   47910 start.go:309] selected driver: docker
	I1109 13:36:50.082771   47910 start.go:930] validating driver "docker" against &{Name:functional-630518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630518 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:36:50.082836   47910 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:36:50.084244   47910 out.go:203] 
	W1109 13:36:50.085151   47910 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1109 13:36:50.086041   47910 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-630518 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
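For reference, the two dry-run invocations this test exercises can be reproduced by hand. The profile name and flags below are taken from the run above, and the 1800MB floor comes from the RSRC_INSUFFICIENT_REQ_MEMORY message itself; the exit-code notes are expectations, not part of the log:

    # requesting 250MB trips the memory validation and exits non-zero
    out/minikube-linux-amd64 start -p functional-630518 --dry-run --memory 250MB \
      --driver=docker --container-runtime=crio; echo "exit=$?"
    # without --memory the existing profile's 4096MB allocation is reused and the dry run succeeds
    out/minikube-linux-amd64 start -p functional-630518 --dry-run \
      --driver=docker --container-runtime=crio; echo "exit=$?"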

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-630518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-630518 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.398958ms)

                                                
                                                
-- stdout --
	* [functional-630518] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:36:49.831540   47773 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:36:49.831658   47773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:36:49.831668   47773 out.go:374] Setting ErrFile to fd 2...
	I1109 13:36:49.831672   47773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:36:49.831989   47773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:36:49.832378   47773 out.go:368] Setting JSON to false
	I1109 13:36:49.833358   47773 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1160,"bootTime":1762694250,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:36:49.833441   47773 start.go:143] virtualization: kvm guest
	I1109 13:36:49.835397   47773 out.go:179] * [functional-630518] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1109 13:36:49.836604   47773 notify.go:221] Checking for updates...
	I1109 13:36:49.836661   47773 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:36:49.837756   47773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:36:49.838882   47773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 13:36:49.840215   47773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 13:36:49.841307   47773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:36:49.842447   47773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:36:49.843959   47773 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:36:49.844603   47773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:36:49.874820   47773 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 13:36:49.874913   47773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:36:49.930219   47773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-09 13:36:49.920172626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:36:49.930320   47773 docker.go:319] overlay module found
	I1109 13:36:49.931675   47773 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1109 13:36:49.932758   47773 start.go:309] selected driver: docker
	I1109 13:36:49.932769   47773 start.go:930] validating driver "docker" against &{Name:functional-630518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630518 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:36:49.932838   47773 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:36:49.934241   47773 out.go:203] 
	W1109 13:36:49.935233   47773 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1109 13:36:49.936073   47773 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
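The French output above comes from minikube's message catalogue. A hedged sketch of how to reproduce it, assuming the language is picked up from the standard locale environment variables (the exact variable the test harness sets is not shown in this log):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-630518 --dry-run \
      --memory 250MB --driver=docker --container-runtime=crio
    # expected: the same RSRC_INSUFFICIENT_REQ_MEMORY failure, with the message rendered in French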

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)
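The three status queries above, spelled out for reference (profile name as in this run; Host, Kubelet, APIServer and Kubeconfig are the template fields the test itself requests):

    out/minikube-linux-amd64 -p functional-630518 status
    out/minikube-linux-amd64 -p functional-630518 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-630518 status -o json   # machine-readable variant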

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (21.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [006c9a3f-f922-4ee0-ae55-cedb1deb2e9f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004470106s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-630518 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-630518 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-630518 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-630518 apply -f testdata/storage-provisioner/pod.yaml
I1109 13:36:33.096845    9365 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fcce8540-0c82-44b4-a3de-3b3f4775cd2e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [fcce8540-0c82-44b4-a3de-3b3f4775cd2e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004323191s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-630518 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-630518 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-630518 apply -f testdata/storage-provisioner/pod.yaml
I1109 13:36:42.680276    9365 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [869765d4-1600-4914-99bf-524f94ca75c5] Pending
helpers_test.go:352: "sp-pod" [869765d4-1600-4914-99bf-524f94ca75c5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003566482s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-630518 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.11s)
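A minimal sketch of the claim and pod this test applies from testdata/storage-provisioner/. The names (myclaim, sp-pod, myfrontend, the test=storage-provisioner label, the /tmp/mount path) are taken from the log above; the image and storage size are illustrative assumptions and the real testdata may differ:

    kubectl --context functional-630518 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: docker.io/library/nginx        # illustrative image
        volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
    EOF
    # the test then writes a file, deletes the pod, re-applies the same pod spec
    # and checks that the file survived on the claim-backed volume:
    kubectl --context functional-630518 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-630518 delete pod sp-pod
    # (re-apply the pod manifest above and wait for it to be Running again)
    kubectl --context functional-630518 exec sp-pod -- ls /tmp/mount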

                                                
                                    
TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh -n functional-630518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cp functional-630518:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2380171248/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh -n functional-630518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh -n functional-630518 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.98s)
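For reference, the copy directions exercised above; the host-side destination path is illustrative, since the harness used a per-test temp directory:

    out/minikube-linux-amd64 -p functional-630518 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
    out/minikube-linux-amd64 -p functional-630518 cp functional-630518:/home/docker/cp-test.txt /tmp/cp-test.txt  # node -> host
    out/minikube-linux-amd64 -p functional-630518 ssh -n functional-630518 "sudo cat /home/docker/cp-test.txt"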

                                                
                                    
TestFunctional/parallel/MySQL (15.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-630518 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-pk462" [a49b6785-754f-4891-bfe1-c8204b87ee33] Pending
helpers_test.go:352: "mysql-5bb876957f-pk462" [a49b6785-754f-4891-bfe1-c8204b87ee33] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-pk462" [a49b6785-754f-4891-bfe1-c8204b87ee33] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.002611935s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-630518 exec mysql-5bb876957f-pk462 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-630518 exec mysql-5bb876957f-pk462 -- mysql -ppassword -e "show databases;": exit status 1 (82.650199ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1109 13:36:38.337905    9365 retry.go:31] will retry after 709.716451ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-630518 exec mysql-5bb876957f-pk462 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-630518 exec mysql-5bb876957f-pk462 -- mysql -ppassword -e "show databases;": exit status 1 (80.787842ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1109 13:36:39.129011    9365 retry.go:31] will retry after 1.812962282s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-630518 exec mysql-5bb876957f-pk462 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (15.94s)
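The two retries logged above exist because mysqld needs a few seconds to accept connections after the pod reports Running. The same check as a plain shell loop (pod name as in this run; the loop bounds are illustrative):

    for i in $(seq 1 10); do
      kubectl --context functional-630518 exec mysql-5bb876957f-pk462 -- \
        mysql -ppassword -e "show databases;" && break
      sleep 2
    done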

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9365/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /etc/test/nested/copy/9365/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)
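This test relies on minikube's file-sync convention: files placed under $MINIKUBE_HOME/files/<path> are copied to <path> inside the node when it is provisioned. A hedged sketch (the 9365 component is this run's test PID, so the exact path is specific to this job):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/9365
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/9365/hosts
    # the copy happens at start/provisioning time, which the suite did earlier in the run:
    out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /etc/test/nested/copy/9365/hosts"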

                                                
                                    
TestFunctional/parallel/CertSync (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9365.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /etc/ssl/certs/9365.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9365.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /usr/share/ca-certificates/9365.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/93652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /etc/ssl/certs/93652.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/93652.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /usr/share/ca-certificates/93652.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)
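Similarly, certificates staged under $MINIKUBE_HOME/certs are expected to appear in the node both under their own name and under an OpenSSL subject-hash name (51391683.0 and 3ec20f2e.0 above). The hash-style filename can be recomputed on the host; the source path below is an assumption about where the harness staged the file:

    openssl x509 -noout -hash -in "$MINIKUBE_HOME/certs/9365.pem"   # should print 51391683
    out/minikube-linux-amd64 -p functional-630518 ssh "sudo cat /etc/ssl/certs/51391683.0"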

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-630518 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
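The same label set can be fetched with shorter queries; both forms below are standard kubectl and equivalent to the go-template used by the test:

    kubectl --context functional-630518 get nodes --show-labels
    kubectl --context functional-630518 get nodes -o jsonpath='{.items[0].metadata.labels}'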

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 ssh "sudo systemctl is-active docker": exit status 1 (288.794971ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 ssh "sudo systemctl is-active containerd": exit status 1 (286.797307ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
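systemctl is-active exits 0 for a running unit and 3 for an inactive one, which is why the two checks above fail with exit status 3 while printing "inactive". The complementary check for the active runtime would be (the expected output is an inference from this cluster using crio):

    out/minikube-linux-amd64 -p functional-630518 ssh "sudo systemctl is-active crio"   # expected: active, exit 0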

                                                
                                    
TestFunctional/parallel/License (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-630518 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-630518 image ls --format short --alsologtostderr:
I1109 13:36:51.988001   49131 out.go:360] Setting OutFile to fd 1 ...
I1109 13:36:51.988098   49131 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:51.988107   49131 out.go:374] Setting ErrFile to fd 2...
I1109 13:36:51.988111   49131 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:51.988267   49131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
I1109 13:36:51.988793   49131 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:51.988880   49131 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:51.989240   49131 cli_runner.go:164] Run: docker container inspect functional-630518 --format={{.State.Status}}
I1109 13:36:52.006889   49131 ssh_runner.go:195] Run: systemctl --version
I1109 13:36:52.006952   49131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630518
I1109 13:36:52.023840   49131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/functional-630518/id_rsa Username:docker}
I1109 13:36:52.115683   49131 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
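As the stderr shows, image ls is answered by running crictl inside the node over SSH. The other output formats exercised in the blocks below, plus the direct crictl view, are:

    out/minikube-linux-amd64 -p functional-630518 image ls --format table
    out/minikube-linux-amd64 -p functional-630518 image ls --format json
    out/minikube-linux-amd64 -p functional-630518 image ls --format yaml
    out/minikube-linux-amd64 -p functional-630518 ssh "sudo crictl images"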

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls --format table --alsologtostderr
2025/11/09 13:36:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-630518 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/my-image                      │ functional-630518  │ 73684214c7c3e │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/library/nginx                 │ latest             │ d261fd19cb632 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-630518 image ls --format table --alsologtostderr:
I1109 13:36:55.995622   49924 out.go:360] Setting OutFile to fd 1 ...
I1109 13:36:55.995899   49924 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:55.995909   49924 out.go:374] Setting ErrFile to fd 2...
I1109 13:36:55.995912   49924 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:55.996104   49924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
I1109 13:36:55.996623   49924 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:55.996728   49924 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:55.997059   49924 cli_runner.go:164] Run: docker container inspect functional-630518 --format={{.State.Status}}
I1109 13:36:56.014405   49924 ssh_runner.go:195] Run: systemctl --version
I1109 13:36:56.014450   49924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630518
I1109 13:36:56.030914   49924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/functional-630518/id_rsa Username:docker}
I1109 13:36:56.122626   49924 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-630518 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5b020c2c1b1c719f262d227765cb904f6b4f03207a2251658df29361ebf30738","repoDigests":["docker.io/library/b15e72ac77ac16099ce69886e54d38aac3875b89c9a4bd0a8b90b38a66615cbb-tmp@sha256:075241c479169cc0738f1032c3b81af021e909bd40d5932c9e6acd52f97cce67"],"repoTags":[],"size":"1466132"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a2
8a820188968dd3e6f06072d","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"siz
e":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f
21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73684214c7c3e38103c9adbd15b6b62427a3d0e8396067bfe918dbe1a59d06c9","repoDigests":["localhost/my-image@sha256:82622ef991f8d24822c570ea14a270d8469684bf6c153a4423421d4a9b33693f"],"repoTags":["localhost/my-image:functional-630518"],"size":"1468744"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198
f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause
:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe
5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-630518 image ls --format json --alsologtostderr:
I1109 13:36:55.782830   49856 out.go:360] Setting OutFile to fd 1 ...
I1109 13:36:55.782934   49856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:55.782940   49856 out.go:374] Setting ErrFile to fd 2...
I1109 13:36:55.782946   49856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:55.783241   49856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
I1109 13:36:55.783941   49856 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:55.784099   49856 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:55.784511   49856 cli_runner.go:164] Run: docker container inspect functional-630518 --format={{.State.Status}}
I1109 13:36:55.804028   49856 ssh_runner.go:195] Run: systemctl --version
I1109 13:36:55.804083   49856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630518
I1109 13:36:55.821263   49856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/functional-630518/id_rsa Username:docker}
I1109 13:36:55.911449   49856 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-630518 image ls --format yaml --alsologtostderr:
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 73684214c7c3e38103c9adbd15b6b62427a3d0e8396067bfe918dbe1a59d06c9
repoDigests:
- localhost/my-image@sha256:82622ef991f8d24822c570ea14a270d8469684bf6c153a4423421d4a9b33693f
repoTags:
- localhost/my-image:functional-630518
size: "1468744"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5b020c2c1b1c719f262d227765cb904f6b4f03207a2251658df29361ebf30738
repoDigests:
- docker.io/library/b15e72ac77ac16099ce69886e54d38aac3875b89c9a4bd0a8b90b38a66615cbb-tmp@sha256:075241c479169cc0738f1032c3b81af021e909bd40d5932c9e6acd52f97cce67
repoTags: []
size: "1466132"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-630518 image ls --format yaml --alsologtostderr:
I1109 13:36:55.503557   49766 out.go:360] Setting OutFile to fd 1 ...
I1109 13:36:55.503688   49766 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:55.503700   49766 out.go:374] Setting ErrFile to fd 2...
I1109 13:36:55.503706   49766 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:55.503880   49766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
I1109 13:36:55.504398   49766 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:55.504513   49766 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:55.504985   49766 cli_runner.go:164] Run: docker container inspect functional-630518 --format={{.State.Status}}
I1109 13:36:55.527850   49766 ssh_runner.go:195] Run: systemctl --version
I1109 13:36:55.527898   49766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630518
I1109 13:36:55.549075   49766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/functional-630518/id_rsa Username:docker}
I1109 13:36:55.649242   49766 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 ssh pgrep buildkitd: exit status 1 (291.169262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image build -t localhost/my-image:functional-630518 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 image build -t localhost/my-image:functional-630518 testdata/build --alsologtostderr: (2.748459207s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-630518 image build -t localhost/my-image:functional-630518 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5b020c2c1b1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-630518
--> 73684214c7c
Successfully tagged localhost/my-image:functional-630518
73684214c7c3e38103c9adbd15b6b62427a3d0e8396067bfe918dbe1a59d06c9
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-630518 image build -t localhost/my-image:functional-630518 testdata/build --alsologtostderr:
I1109 13:36:52.510691   49296 out.go:360] Setting OutFile to fd 1 ...
I1109 13:36:52.511081   49296 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:52.511091   49296 out.go:374] Setting ErrFile to fd 2...
I1109 13:36:52.511097   49296 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:36:52.511464   49296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
I1109 13:36:52.512249   49296 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:52.513216   49296 config.go:182] Loaded profile config "functional-630518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:36:52.513793   49296 cli_runner.go:164] Run: docker container inspect functional-630518 --format={{.State.Status}}
I1109 13:36:52.534357   49296 ssh_runner.go:195] Run: systemctl --version
I1109 13:36:52.534506   49296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630518
I1109 13:36:52.556581   49296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/functional-630518/id_rsa Username:docker}
I1109 13:36:52.657114   49296 build_images.go:162] Building image from path: /tmp/build.3722207159.tar
I1109 13:36:52.657184   49296 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1109 13:36:52.666883   49296 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3722207159.tar
I1109 13:36:52.671200   49296 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3722207159.tar: stat -c "%s %y" /var/lib/minikube/build/build.3722207159.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3722207159.tar': No such file or directory
I1109 13:36:52.671230   49296 ssh_runner.go:362] scp /tmp/build.3722207159.tar --> /var/lib/minikube/build/build.3722207159.tar (3072 bytes)
I1109 13:36:52.692387   49296 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3722207159
I1109 13:36:52.700978   49296 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3722207159 -xf /var/lib/minikube/build/build.3722207159.tar
I1109 13:36:52.710081   49296 crio.go:315] Building image: /var/lib/minikube/build/build.3722207159
I1109 13:36:52.710146   49296 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-630518 /var/lib/minikube/build/build.3722207159 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1109 13:36:55.164957   49296 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-630518 /var/lib/minikube/build/build.3722207159 --cgroup-manager=cgroupfs: (2.454779175s)
I1109 13:36:55.165035   49296 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3722207159
I1109 13:36:55.173028   49296 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3722207159.tar
I1109 13:36:55.180750   49296 build_images.go:218] Built localhost/my-image:functional-630518 from /tmp/build.3722207159.tar
I1109 13:36:55.180782   49296 build_images.go:134] succeeded building to: functional-630518
I1109 13:36:55.180790   49296 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.30s)
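
A rough manual replay of the same build, assuming a hypothetical scratch directory; the real test ships the testdata/build context, whose Dockerfile matches the three steps shown above:

    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    echo hello > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    # minikube copies the context to the node and builds it there with podman (CRI-O runtime)
    minikube -p functional-630518 image build -t localhost/my-image:functional-630518 /tmp/build-demo
    minikube -p functional-630518 image ls | grep my-image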

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-630518
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.01s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
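
The three update-context cases differ only in the kubeconfig state they start from; the command under test is the same. A hedged sketch of running it directly:

    # rewrite the kubeconfig entry for the profile to point at the current cluster endpoint
    minikube -p functional-630518 update-context
    # then confirm which context kubectl would use
    kubectl config current-context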

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-630518 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-630518 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-630518 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-630518 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 43207: os: process already finished
helpers_test.go:519: unable to terminate pid 42859: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-630518 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-630518 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a24c3e15-dcb6-47e0-9cd1-b50d167bc576] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a24c3e15-dcb6-47e0-9cd1-b50d167bc576] Running
E1109 13:36:35.578684    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.002742418s
I1109 13:36:40.022864    9365 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image rm kicbase/echo-server:functional-630518 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.23s)
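
The remove path is the mirror image of the load path; a quick manual check that the tag is gone from the runtime:

    minikube -p functional-630518 image rm kicbase/echo-server:functional-630518
    minikube -p functional-630518 image ls | grep echo-server || echo "image removed"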

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-630518 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.2.63 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
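
Putting the tunnel steps together, a minimal manual sequence (assuming the nginx-svc LoadBalancer service from testdata/testsvc.yaml is applied, as in WaitService/Setup above):

    # run in its own terminal: routes the LoadBalancer address range into the cluster
    minikube -p functional-630518 tunnel
    # once an ingress IP is assigned, hit the service directly from the host
    IP=$(kubectl --context functional-630518 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -sSf "http://$IP"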

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-630518 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "333.486632ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.730398ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "320.153749ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.408585ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
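
The JSON output is the scripting-friendly variant of profile list; a small sketch (the .valid[].Name path assumes the current output schema, which groups profiles under "valid" and "invalid"):

    minikube profile list -o json | jq -r '.valid[].Name'
    # --light skips the per-profile status probes, which is why it returns in ~60ms above
    minikube profile list -o json --light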

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdany-port2005798725/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762695402208404455" to /tmp/TestFunctionalparallelMountCmdany-port2005798725/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762695402208404455" to /tmp/TestFunctionalparallelMountCmdany-port2005798725/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762695402208404455" to /tmp/TestFunctionalparallelMountCmdany-port2005798725/001/test-1762695402208404455
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.547446ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1109 13:36:42.489244    9365 retry.go:31] will retry after 361.41993ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  9 13:36 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  9 13:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  9 13:36 test-1762695402208404455
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh cat /mount-9p/test-1762695402208404455
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-630518 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [43ea0d73-96cc-4e14-bd8d-61a6ffed7685] Pending
helpers_test.go:352: "busybox-mount" [43ea0d73-96cc-4e14-bd8d-61a6ffed7685] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [43ea0d73-96cc-4e14-bd8d-61a6ffed7685] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [43ea0d73-96cc-4e14-bd8d-61a6ffed7685] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002467093s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-630518 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdany-port2005798725/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.48s)
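
A condensed version of the 9p round-trip this test performs, with an arbitrary host directory standing in for the generated temp dir:

    mkdir -p /tmp/mount-demo && echo "from host" > /tmp/mount-demo/hello.txt
    # run in its own terminal: serves /tmp/mount-demo to the node over 9p at /mount-9p
    minikube mount -p functional-630518 /tmp/mount-demo:/mount-9p
    # from another shell, confirm the mount and read the file from inside the node
    minikube -p functional-630518 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-630518 ssh "cat /mount-9p/hello.txt"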

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdspecific-port3344709401/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (265.896035ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1109 13:36:47.952902    9365 retry.go:31] will retry after 507.105394ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdspecific-port3344709401/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 ssh "sudo umount -f /mount-9p": exit status 1 (272.697954ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-630518 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdspecific-port3344709401/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T" /mount1: exit status 1 (347.735198ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1109 13:36:49.824311    9365 retry.go:31] will retry after 268.979825ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-630518 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-630518 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1739493021/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)
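
Cleanup goes through the dedicated kill flag rather than unmounting by hand, as the final step above shows; a one-line sketch:

    # terminate any minikube mount processes for the profile and detach their 9p mounts
    minikube mount -p functional-630518 --kill=true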

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 service list: (1.68141847s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-630518 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-630518 service list -o json: (1.683255194s)
functional_test.go:1504: Took "1.683348518s" to run "out/minikube-linux-amd64 -p functional-630518 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)
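
The JSON form of service list is what scripts would consume; a minimal sketch that simply pretty-prints it (no field names assumed):

    minikube -p functional-630518 service list -o json | jq .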

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-630518
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-630518
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-630518
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (113.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m52.600096799s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (113.27s)
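
The flags that matter for this cluster are --ha (multiple control-plane nodes behind the shared API endpoint, 192.168.49.254:8443 in the later status logs) and --wait true; a sketch of the same start plus the follow-up status check:

    minikube start -p ha-555080 --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    minikube -p ha-555080 status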

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 kubectl -- rollout status deployment/busybox: (1.533092481s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-66gfm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-rxj9d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-t2vsq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-66gfm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-rxj9d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-t2vsq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-66gfm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-rxj9d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-t2vsq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.36s)
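
The DNS checks can be replayed against any one of the busybox pods; $POD below is a placeholder filled from the same get pods call the test uses (this run has only the busybox pods in the default namespace):

    POD=$(kubectl --context ha-555080 get pods -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ha-555080 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local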

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-66gfm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-66gfm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-rxj9d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-rxj9d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-t2vsq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 kubectl -- exec busybox-7b57f96db7-t2vsq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
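
Host reachability from a pod comes down to one lookup and one ping; a compact replay using the same $POD placeholder as above:

    kubectl --context ha-555080 exec "$POD" -- nslookup host.minikube.internal
    # 192.168.49.1 is the address that name resolves to in this run (the docker network gateway)
    kubectl --context ha-555080 exec "$POD" -- ping -c 1 192.168.49.1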

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 node add --alsologtostderr -v 5: (53.546194831s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.37s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-555080 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp testdata/cp-test.txt ha-555080:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile859086733/001/cp-test_ha-555080.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080:/home/docker/cp-test.txt ha-555080-m02:/home/docker/cp-test_ha-555080_ha-555080-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test_ha-555080_ha-555080-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080:/home/docker/cp-test.txt ha-555080-m03:/home/docker/cp-test_ha-555080_ha-555080-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m03 "sudo cat /home/docker/cp-test_ha-555080_ha-555080-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080:/home/docker/cp-test.txt ha-555080-m04:/home/docker/cp-test_ha-555080_ha-555080-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m04 "sudo cat /home/docker/cp-test_ha-555080_ha-555080-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp testdata/cp-test.txt ha-555080-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile859086733/001/cp-test_ha-555080-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m02:/home/docker/cp-test.txt ha-555080:/home/docker/cp-test_ha-555080-m02_ha-555080.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080 "sudo cat /home/docker/cp-test_ha-555080-m02_ha-555080.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m02:/home/docker/cp-test.txt ha-555080-m03:/home/docker/cp-test_ha-555080-m02_ha-555080-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m03 "sudo cat /home/docker/cp-test_ha-555080-m02_ha-555080-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m02:/home/docker/cp-test.txt ha-555080-m04:/home/docker/cp-test_ha-555080-m02_ha-555080-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m04 "sudo cat /home/docker/cp-test_ha-555080-m02_ha-555080-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp testdata/cp-test.txt ha-555080-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile859086733/001/cp-test_ha-555080-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m03:/home/docker/cp-test.txt ha-555080:/home/docker/cp-test_ha-555080-m03_ha-555080.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080 "sudo cat /home/docker/cp-test_ha-555080-m03_ha-555080.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m03:/home/docker/cp-test.txt ha-555080-m02:/home/docker/cp-test_ha-555080-m03_ha-555080-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test_ha-555080-m03_ha-555080-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m03:/home/docker/cp-test.txt ha-555080-m04:/home/docker/cp-test_ha-555080-m03_ha-555080-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m04 "sudo cat /home/docker/cp-test_ha-555080-m03_ha-555080-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp testdata/cp-test.txt ha-555080-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile859086733/001/cp-test_ha-555080-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m04:/home/docker/cp-test.txt ha-555080:/home/docker/cp-test_ha-555080-m04_ha-555080.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080 "sudo cat /home/docker/cp-test_ha-555080-m04_ha-555080.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m04:/home/docker/cp-test.txt ha-555080-m02:/home/docker/cp-test_ha-555080-m04_ha-555080-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test_ha-555080-m04_ha-555080-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 cp ha-555080-m04:/home/docker/cp-test.txt ha-555080-m03:/home/docker/cp-test_ha-555080-m04_ha-555080-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 ssh -n ha-555080-m03 "sudo cat /home/docker/cp-test_ha-555080-m04_ha-555080-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.23s)
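
The whole copy matrix reduces to two primitives, minikube cp and ssh -n; one representative pair using node names from this cluster:

    minikube -p ha-555080 cp testdata/cp-test.txt ha-555080-m02:/home/docker/cp-test.txt
    minikube -p ha-555080 ssh -n ha-555080-m02 "sudo cat /home/docker/cp-test.txt"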

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 node stop m02 --alsologtostderr -v 5: (18.551013958s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5: exit status 7 (655.957728ms)

                                                
                                                
-- stdout --
	ha-555080
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-555080-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555080-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-555080-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:50:18.630048   73953 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:50:18.630314   73953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:50:18.630324   73953 out.go:374] Setting ErrFile to fd 2...
	I1109 13:50:18.630328   73953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:50:18.630490   73953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:50:18.630692   73953 out.go:368] Setting JSON to false
	I1109 13:50:18.630722   73953 mustload.go:66] Loading cluster: ha-555080
	I1109 13:50:18.630832   73953 notify.go:221] Checking for updates...
	I1109 13:50:18.631069   73953 config.go:182] Loaded profile config "ha-555080": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:50:18.631082   73953 status.go:174] checking status of ha-555080 ...
	I1109 13:50:18.631461   73953 cli_runner.go:164] Run: docker container inspect ha-555080 --format={{.State.Status}}
	I1109 13:50:18.651917   73953 status.go:371] ha-555080 host status = "Running" (err=<nil>)
	I1109 13:50:18.651936   73953 host.go:66] Checking if "ha-555080" exists ...
	I1109 13:50:18.652160   73953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555080
	I1109 13:50:18.668982   73953 host.go:66] Checking if "ha-555080" exists ...
	I1109 13:50:18.669196   73953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:50:18.669238   73953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555080
	I1109 13:50:18.686071   73953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/ha-555080/id_rsa Username:docker}
	I1109 13:50:18.775349   73953 ssh_runner.go:195] Run: systemctl --version
	I1109 13:50:18.781926   73953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:50:18.793514   73953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:50:18.852090   73953 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-09 13:50:18.84175858 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 13:50:18.852602   73953 kubeconfig.go:125] found "ha-555080" server: "https://192.168.49.254:8443"
	I1109 13:50:18.852634   73953 api_server.go:166] Checking apiserver status ...
	I1109 13:50:18.852684   73953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:50:18.863683   73953 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	W1109 13:50:18.871395   73953 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:50:18.871451   73953 ssh_runner.go:195] Run: ls
	I1109 13:50:18.875307   73953 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 13:50:18.879134   73953 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1109 13:50:18.879151   73953 status.go:463] ha-555080 apiserver status = Running (err=<nil>)
	I1109 13:50:18.879160   73953 status.go:176] ha-555080 status: &{Name:ha-555080 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:50:18.879182   73953 status.go:174] checking status of ha-555080-m02 ...
	I1109 13:50:18.879383   73953 cli_runner.go:164] Run: docker container inspect ha-555080-m02 --format={{.State.Status}}
	I1109 13:50:18.896344   73953 status.go:371] ha-555080-m02 host status = "Stopped" (err=<nil>)
	I1109 13:50:18.896359   73953 status.go:384] host is not running, skipping remaining checks
	I1109 13:50:18.896364   73953 status.go:176] ha-555080-m02 status: &{Name:ha-555080-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:50:18.896377   73953 status.go:174] checking status of ha-555080-m03 ...
	I1109 13:50:18.896588   73953 cli_runner.go:164] Run: docker container inspect ha-555080-m03 --format={{.State.Status}}
	I1109 13:50:18.912998   73953 status.go:371] ha-555080-m03 host status = "Running" (err=<nil>)
	I1109 13:50:18.913017   73953 host.go:66] Checking if "ha-555080-m03" exists ...
	I1109 13:50:18.913280   73953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555080-m03
	I1109 13:50:18.929883   73953 host.go:66] Checking if "ha-555080-m03" exists ...
	I1109 13:50:18.930092   73953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:50:18.930122   73953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555080-m03
	I1109 13:50:18.946854   73953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/ha-555080-m03/id_rsa Username:docker}
	I1109 13:50:19.036282   73953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:50:19.048489   73953 kubeconfig.go:125] found "ha-555080" server: "https://192.168.49.254:8443"
	I1109 13:50:19.048517   73953 api_server.go:166] Checking apiserver status ...
	I1109 13:50:19.048553   73953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:50:19.058470   73953 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup
	W1109 13:50:19.065913   73953 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1169/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:50:19.065961   73953 ssh_runner.go:195] Run: ls
	I1109 13:50:19.069305   73953 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 13:50:19.074561   73953 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1109 13:50:19.074584   73953 status.go:463] ha-555080-m03 apiserver status = Running (err=<nil>)
	I1109 13:50:19.074594   73953 status.go:176] ha-555080-m03 status: &{Name:ha-555080-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:50:19.074614   73953 status.go:174] checking status of ha-555080-m04 ...
	I1109 13:50:19.074854   73953 cli_runner.go:164] Run: docker container inspect ha-555080-m04 --format={{.State.Status}}
	I1109 13:50:19.093257   73953 status.go:371] ha-555080-m04 host status = "Running" (err=<nil>)
	I1109 13:50:19.093274   73953 host.go:66] Checking if "ha-555080-m04" exists ...
	I1109 13:50:19.093502   73953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-555080-m04
	I1109 13:50:19.110019   73953 host.go:66] Checking if "ha-555080-m04" exists ...
	I1109 13:50:19.110253   73953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:50:19.110282   73953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-555080-m04
	I1109 13:50:19.127496   73953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/ha-555080-m04/id_rsa Username:docker}
	I1109 13:50:19.215071   73953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:50:19.226775   73953 status.go:176] ha-555080-m04 status: &{Name:ha-555080-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.21s)
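
Note the non-zero exit above: minikube status exits non-zero (7 in this run) while a node is stopped, which is the expected signal here. A minimal replay:

    minikube -p ha-555080 node stop m02
    minikube -p ha-555080 status; echo "exit=$?"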

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 node start m02 --alsologtostderr -v 5: (13.601273196s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 stop --alsologtostderr -v 5
E1109 13:50:54.596912    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:25.253493    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:25.259850    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:25.271182    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:25.292463    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:25.333778    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:25.415106    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:25.576560    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:25.898194    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:26.540190    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:27.821752    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 stop --alsologtostderr -v 5: (55.051613703s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 start --wait true --alsologtostderr -v 5
E1109 13:51:30.382975    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:35.504284    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:51:45.745718    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:52:06.227184    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:52:17.666046    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 start --wait true --alsologtostderr -v 5: (53.30767478s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.48s)
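The restart check above boils down to comparing the node list before and after a full stop/start cycle. A rough shell equivalent, assuming the same ha-555080 profile; the temporary files are part of this sketch, not of the test:

	# capture the node list, cycle the whole cluster, and confirm nothing was dropped
	out/minikube-linux-amd64 -p ha-555080 node list > /tmp/nodes-before.txt
	out/minikube-linux-amd64 -p ha-555080 stop --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-555080 start --wait true --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-555080 node list > /tmp/nodes-after.txt
	diff /tmp/nodes-before.txt /tmp/nodes-after.txt && echo "node list unchanged"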

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 node delete m03 --alsologtostderr -v 5: (9.623431551s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.44s)
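The go-template used at ha_test.go:521 is easier to read once unquoted from the harness: it prints one Ready-condition status per node, and every line should be True after the delete settles. Against whatever kubeconfig context the run left active:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'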

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (43.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 stop --alsologtostderr -v 5
E1109 13:52:47.188746    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 stop --alsologtostderr -v 5: (43.388566544s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5: exit status 7 (107.669747ms)

                                                
                                                
-- stdout --
	ha-555080
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555080-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-555080-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:53:18.238381   87997 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:53:18.238467   87997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:53:18.238475   87997 out.go:374] Setting ErrFile to fd 2...
	I1109 13:53:18.238479   87997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:53:18.238628   87997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 13:53:18.238789   87997 out.go:368] Setting JSON to false
	I1109 13:53:18.238817   87997 mustload.go:66] Loading cluster: ha-555080
	I1109 13:53:18.238942   87997 notify.go:221] Checking for updates...
	I1109 13:53:18.239149   87997 config.go:182] Loaded profile config "ha-555080": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:53:18.239161   87997 status.go:174] checking status of ha-555080 ...
	I1109 13:53:18.239562   87997 cli_runner.go:164] Run: docker container inspect ha-555080 --format={{.State.Status}}
	I1109 13:53:18.256657   87997 status.go:371] ha-555080 host status = "Stopped" (err=<nil>)
	I1109 13:53:18.256677   87997 status.go:384] host is not running, skipping remaining checks
	I1109 13:53:18.256684   87997 status.go:176] ha-555080 status: &{Name:ha-555080 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:53:18.256715   87997 status.go:174] checking status of ha-555080-m02 ...
	I1109 13:53:18.256924   87997 cli_runner.go:164] Run: docker container inspect ha-555080-m02 --format={{.State.Status}}
	I1109 13:53:18.273943   87997 status.go:371] ha-555080-m02 host status = "Stopped" (err=<nil>)
	I1109 13:53:18.273958   87997 status.go:384] host is not running, skipping remaining checks
	I1109 13:53:18.273962   87997 status.go:176] ha-555080-m02 status: &{Name:ha-555080-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:53:18.273975   87997 status.go:174] checking status of ha-555080-m04 ...
	I1109 13:53:18.274209   87997 cli_runner.go:164] Run: docker container inspect ha-555080-m04 --format={{.State.Status}}
	I1109 13:53:18.290450   87997 status.go:371] ha-555080-m04 host status = "Stopped" (err=<nil>)
	I1109 13:53:18.290468   87997 status.go:384] host is not running, skipping remaining checks
	I1109 13:53:18.290473   87997 status.go:176] ha-555080-m04 status: &{Name:ha-555080-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.50s)
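Exit status 7 from the status command is the expected signal once every host is stopped, which is why the non-zero exit above still counts as a pass. A minimal check from the shell, reusing this run's profile name:

	# status exits non-zero (7 here) when hosts are down, so read the exit code rather than the text
	out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
	echo "status exit code: $?"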

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (55.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1109 13:54:09.110913    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (54.812694634s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (39.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-555080 node add --control-plane --alsologtostderr -v 5: (38.319555949s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-555080 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
x
+
TestJSONOutput/start/Command (67.82s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-926498 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1109 13:55:54.605790    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-926498 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m7.8242675s)
--- PASS: TestJSONOutput/start/Command (67.82s)
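With --output=json the start command emits one CloudEvents-style JSON object per line, the same shape TestErrorJSONOutput captures further down. A sketch that pulls out just the step messages; jq is an assumption of this example and is not part of the test tooling:

	# print each step message from the event stream
	out/minikube-linux-amd64 start -p json-output-926498 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'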

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.97s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-926498 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-926498 --output=json --user=testUser: (7.968843449s)
--- PASS: TestJSONOutput/stop/Command (7.97s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-235453 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-235453 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.345531ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"39d4f886-a506-497e-a8f0-3c919ce1f142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-235453] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4cf5b9aa-81a3-4fae-82ca-5146f20b0903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"d8a53692-187c-493a-aa31-94f4772a8f19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b29087dc-2b5c-4230-8079-a280ea1f5053","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig"}}
	{"specversion":"1.0","id":"59b2d78e-106f-44cb-a6f7-6c1f3f14698f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube"}}
	{"specversion":"1.0","id":"d967c0bc-28f9-4a4d-b3ad-74eb75ab1ab6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bbbddd14-10e2-4296-9753-bb419a642c73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c4210c7-67fa-4b75-8c6c-0a833d1c7fbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-235453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-235453
--- PASS: TestErrorJSONOutput (0.21s)
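Even on the failing path the output stays machine-readable: the last event above carries type io.k8s.sigs.minikube.error with name, message and exitcode fields. A sketch that surfaces just that event, again assuming jq and reusing the (since deleted) profile name from this run:

	# the unsupported driver yields exit status 56 plus one structured error event on stdout
	out/minikube-linux-amd64 start -p json-output-error-235453 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'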

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (28.24s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-766438 --network=
E1109 13:56:52.952229    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-766438 --network=: (26.078753676s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-766438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-766438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-766438: (2.146296186s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.24s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (22.68s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-365149 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-365149 --network=bridge: (20.716164512s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-365149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-365149
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-365149: (1.945084681s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.68s)

                                                
                                    
x
+
TestKicExistingNetwork (24.2s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1109 13:57:18.449338    9365 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1109 13:57:18.467115    9365 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1109 13:57:18.467169    9365 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1109 13:57:18.467184    9365 cli_runner.go:164] Run: docker network inspect existing-network
W1109 13:57:18.483123    9365 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1109 13:57:18.483144    9365 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1109 13:57:18.483165    9365 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1109 13:57:18.483312    9365 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1109 13:57:18.500758    9365 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c085420cd01a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:dd:1a:ae:de:73} reservation:<nil>}
I1109 13:57:18.501109    9365 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003778d0}
I1109 13:57:18.501132    9365 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1109 13:57:18.501166    9365 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1109 13:57:18.554974    9365 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-994821 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-994821 --network=existing-network: (22.078761494s)
helpers_test.go:175: Cleaning up "existing-network-994821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-994821
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-994821: (1.988619535s)
I1109 13:57:42.637998    9365 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.20s)
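The log above shows the exact network_create.go command used to pre-create the bridge network before a profile is pointed at it with --network=existing-network. A condensed replay; 192.168.58.0/24 is simply the subnet this run picked, so choose a free one on your own host:

	# pre-create a labelled bridge network the way the test does
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	# then reuse it instead of letting minikube create its own
	out/minikube-linux-amd64 start -p existing-network-994821 --network=existing-network
	docker network ls --format '{{.Name}}'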

                                                
                                    
x
+
TestKicCustomSubnet (24.58s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-344466 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-344466 --subnet=192.168.60.0/24: (22.403735288s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-344466 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-344466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-344466
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-344466: (2.154980383s)
--- PASS: TestKicCustomSubnet (24.58s)
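The subnet assertion is a single docker network inspect over the profile's network, reading the first IPAM entry. Reproduced from this run's commands:

	out/minikube-linux-amd64 start -p custom-subnet-344466 --subnet=192.168.60.0/24
	# the profile's network should report exactly the requested CIDR
	docker network inspect custom-subnet-344466 --format '{{(index .IPAM.Config 0).Subnet}}'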

                                                
                                    
x
+
TestKicStaticIP (24.55s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-945896 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-945896 --static-ip=192.168.200.200: (22.286523214s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-945896 ip
helpers_test.go:175: Cleaning up "static-ip-945896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-945896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-945896: (2.124252823s)
--- PASS: TestKicStaticIP (24.55s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (44.98s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-583194 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-583194 --driver=docker  --container-runtime=crio: (19.526191353s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-585842 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-585842 --driver=docker  --container-runtime=crio: (19.616418957s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-583194
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-585842
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-585842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-585842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-585842: (2.327126475s)
helpers_test.go:175: Cleaning up "first-583194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-583194
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-583194: (2.351405052s)
--- PASS: TestMinikubeProfile (44.98s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (4.79s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-996783 --memory=3072 --mount-string /tmp/TestMountStartserial1324338817/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-996783 --memory=3072 --mount-string /tmp/TestMountStartserial1324338817/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.786585124s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.79s)
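The mount-start profiles run without Kubernetes and only exercise the 9p host mount; the VerifyMount* steps that follow simply list the mount point over ssh. A sketch with an illustrative host path in place of the test's temporary directory:

	# start a kube-less node with a host directory mounted at /minikube-host (host path is illustrative)
	out/minikube-linux-amd64 start -p mount-start-1-996783 --memory=3072 \
	  --mount-string /srv/shared:/minikube-host --mount-gid 0 --mount-msize 6543 \
	  --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
	# the verification amounts to listing the mount point inside the node
	out/minikube-linux-amd64 -p mount-start-1-996783 ssh -- ls /minikube-host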

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-996783 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (4.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-007526 --memory=3072 --mount-string /tmp/TestMountStartserial1324338817/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-007526 --memory=3072 --mount-string /tmp/TestMountStartserial1324338817/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.886346052s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.89s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-007526 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-996783 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-996783 --alsologtostderr -v=5: (1.682189354s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-007526 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-007526
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-007526: (1.241399048s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.11s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-007526
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-007526: (6.113322219s)
--- PASS: TestMountStart/serial/RestartStopped (7.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-007526 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (91.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183462 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1109 14:00:54.598150    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183462 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m31.165739807s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.63s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-183462 -- rollout status deployment/busybox: (1.670570833s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-bwtmt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-dbhw6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-bwtmt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-dbhw6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-bwtmt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-dbhw6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.05s)
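The deployment check reads pod IPs and names via jsonpath and then resolves external and in-cluster names from inside each pod. The same queries outside the harness; the busybox pod name is whatever this run's ReplicaSet produced:

	# pod IPs and names straight from the busybox deployment
	kubectl --context multinode-183462 get pods -o jsonpath='{.items[*].status.podIP}'
	kubectl --context multinode-183462 get pods -o jsonpath='{.items[*].metadata.name}'
	# resolve an in-cluster name from one of the pods (name copied from this run)
	kubectl --context multinode-183462 exec busybox-7b57f96db7-bwtmt -- nslookup kubernetes.default.svc.cluster.local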

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-bwtmt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-bwtmt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-dbhw6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183462 -- exec busybox-7b57f96db7-dbhw6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
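The host reachability check derives the host's address from an nslookup of host.minikube.internal (fifth line, third field) and pings it once. As a standalone snippet, with the pod name copied from this run:

	# resolve the host's address from inside a pod, then ping it once
	HOST_IP=$(kubectl --context multinode-183462 exec busybox-7b57f96db7-bwtmt -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-183462 exec busybox-7b57f96db7-bwtmt -- sh -c "ping -c 1 ${HOST_IP}"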

                                                
                                    
x
+
TestMultiNode/serial/AddNode (53.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-183462 -v=5 --alsologtostderr
E1109 14:01:25.253198    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-183462 -v=5 --alsologtostderr: (52.595168249s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.20s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-183462 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp testdata/cp-test.txt multinode-183462:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3901530868/001/cp-test_multinode-183462.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462:/home/docker/cp-test.txt multinode-183462-m02:/home/docker/cp-test_multinode-183462_multinode-183462-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m02 "sudo cat /home/docker/cp-test_multinode-183462_multinode-183462-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462:/home/docker/cp-test.txt multinode-183462-m03:/home/docker/cp-test_multinode-183462_multinode-183462-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m03 "sudo cat /home/docker/cp-test_multinode-183462_multinode-183462-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp testdata/cp-test.txt multinode-183462-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3901530868/001/cp-test_multinode-183462-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462-m02:/home/docker/cp-test.txt multinode-183462:/home/docker/cp-test_multinode-183462-m02_multinode-183462.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462 "sudo cat /home/docker/cp-test_multinode-183462-m02_multinode-183462.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462-m02:/home/docker/cp-test.txt multinode-183462-m03:/home/docker/cp-test_multinode-183462-m02_multinode-183462-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m03 "sudo cat /home/docker/cp-test_multinode-183462-m02_multinode-183462-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp testdata/cp-test.txt multinode-183462-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3901530868/001/cp-test_multinode-183462-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462-m03:/home/docker/cp-test.txt multinode-183462:/home/docker/cp-test_multinode-183462-m03_multinode-183462.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462 "sudo cat /home/docker/cp-test_multinode-183462-m03_multinode-183462.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462-m03:/home/docker/cp-test.txt multinode-183462-m02:/home/docker/cp-test_multinode-183462-m03_multinode-183462-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m02 "sudo cat /home/docker/cp-test_multinode-183462-m03_multinode-183462-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.24s)
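The copy matrix above exercises every direction minikube cp supports: host to node, node to host, and node to node, with node-qualified paths on either side. A trimmed-down version; the destination file names here are illustrative rather than the test's generated ones:

	# host -> node
	out/minikube-linux-amd64 -p multinode-183462 cp testdata/cp-test.txt multinode-183462:/home/docker/cp-test.txt
	# node -> host
	out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462:/home/docker/cp-test.txt ./cp-test_multinode-183462.txt
	# node -> node, then read it back over ssh to confirm the copy landed
	out/minikube-linux-amd64 -p multinode-183462 cp multinode-183462:/home/docker/cp-test.txt multinode-183462-m02:/home/docker/cp-test_copy.txt
	out/minikube-linux-amd64 -p multinode-183462 ssh -n multinode-183462-m02 "sudo cat /home/docker/cp-test_copy.txt"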

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-183462 node stop m03: (1.245107226s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183462 status: exit status 7 (466.188285ms)

                                                
                                                
-- stdout --
	multinode-183462
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-183462-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-183462-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183462 status --alsologtostderr: exit status 7 (459.385101ms)

                                                
                                                
-- stdout --
	multinode-183462
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-183462-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-183462-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:02:19.637945  148140 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:02:19.638040  148140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:02:19.638047  148140 out.go:374] Setting ErrFile to fd 2...
	I1109 14:02:19.638051  148140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:02:19.638259  148140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:02:19.638404  148140 out.go:368] Setting JSON to false
	I1109 14:02:19.638436  148140 mustload.go:66] Loading cluster: multinode-183462
	I1109 14:02:19.638467  148140 notify.go:221] Checking for updates...
	I1109 14:02:19.638965  148140 config.go:182] Loaded profile config "multinode-183462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:02:19.638984  148140 status.go:174] checking status of multinode-183462 ...
	I1109 14:02:19.639499  148140 cli_runner.go:164] Run: docker container inspect multinode-183462 --format={{.State.Status}}
	I1109 14:02:19.657555  148140 status.go:371] multinode-183462 host status = "Running" (err=<nil>)
	I1109 14:02:19.657575  148140 host.go:66] Checking if "multinode-183462" exists ...
	I1109 14:02:19.657829  148140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-183462
	I1109 14:02:19.674077  148140 host.go:66] Checking if "multinode-183462" exists ...
	I1109 14:02:19.674330  148140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:02:19.674378  148140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-183462
	I1109 14:02:19.690415  148140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/multinode-183462/id_rsa Username:docker}
	I1109 14:02:19.779234  148140 ssh_runner.go:195] Run: systemctl --version
	I1109 14:02:19.784975  148140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:02:19.796211  148140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:02:19.850785  148140 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-09 14:02:19.840388009 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:02:19.851255  148140 kubeconfig.go:125] found "multinode-183462" server: "https://192.168.67.2:8443"
	I1109 14:02:19.851279  148140 api_server.go:166] Checking apiserver status ...
	I1109 14:02:19.851307  148140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:02:19.862142  148140 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup
	W1109 14:02:19.869561  148140 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:02:19.869602  148140 ssh_runner.go:195] Run: ls
	I1109 14:02:19.872753  148140 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1109 14:02:19.876828  148140 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1109 14:02:19.876850  148140 status.go:463] multinode-183462 apiserver status = Running (err=<nil>)
	I1109 14:02:19.876863  148140 status.go:176] multinode-183462 status: &{Name:multinode-183462 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:02:19.876879  148140 status.go:174] checking status of multinode-183462-m02 ...
	I1109 14:02:19.877087  148140 cli_runner.go:164] Run: docker container inspect multinode-183462-m02 --format={{.State.Status}}
	I1109 14:02:19.893317  148140 status.go:371] multinode-183462-m02 host status = "Running" (err=<nil>)
	I1109 14:02:19.893335  148140 host.go:66] Checking if "multinode-183462-m02" exists ...
	I1109 14:02:19.893602  148140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-183462-m02
	I1109 14:02:19.909765  148140 host.go:66] Checking if "multinode-183462-m02" exists ...
	I1109 14:02:19.910013  148140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:02:19.910049  148140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-183462-m02
	I1109 14:02:19.925835  148140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/21139-5854/.minikube/machines/multinode-183462-m02/id_rsa Username:docker}
	I1109 14:02:20.014226  148140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:02:20.025427  148140 status.go:176] multinode-183462-m02 status: &{Name:multinode-183462-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:02:20.025462  148140 status.go:174] checking status of multinode-183462-m03 ...
	I1109 14:02:20.025721  148140 cli_runner.go:164] Run: docker container inspect multinode-183462-m03 --format={{.State.Status}}
	I1109 14:02:20.042989  148140 status.go:371] multinode-183462-m03 host status = "Stopped" (err=<nil>)
	I1109 14:02:20.043006  148140 status.go:384] host is not running, skipping remaining checks
	I1109 14:02:20.043011  148140 status.go:176] multinode-183462-m03 status: &{Name:multinode-183462-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
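For context, the sequence this test exercises is small enough to run by hand; the sketch below uses an installed minikube in place of the test build (out/minikube-linux-amd64) and reuses the profile name from this run:

  minikube -p multinode-183462 node stop m03
  # with one worker down, status lists it as Stopped and exits with code 7
  minikube -p multinode-183462 status
  minikube -p multinode-183462 status --alsologtostderr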

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-183462 node start m03 -v=5 --alsologtostderr: (6.404070257s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.06s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (57.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183462
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-183462
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-183462: (31.22113107s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183462 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183462 --wait=true -v=5 --alsologtostderr: (25.954841437s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183462
--- PASS: TestMultiNode/serial/RestartKeepsNodes (57.29s)
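The restart check above amounts to: record the node list, stop the whole profile, start it again with --wait=true, and confirm the same nodes come back. A rough manual equivalent (same substitution of minikube for the test binary):

  minikube node list -p multinode-183462
  minikube stop -p multinode-183462
  minikube start -p multinode-183462 --wait=true -v=5 --alsologtostderr
  minikube node list -p multinode-183462    # expected to match the pre-stop list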

                                                
                                    
TestMultiNode/serial/DeleteNode (4.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-183462 node delete m03: (4.342441885s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.90s)
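Deleting a node is paired with a kubectl readiness query; the go-template is the same one the test runs, shown here without the extra shell quoting:

  minikube -p multinode-183462 node delete m03
  minikube -p multinode-183462 status --alsologtostderr
  kubectl get nodes
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'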

                                                
                                    
TestMultiNode/serial/StopMultiNode (19.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-183462 stop: (19.182392161s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183462 status: exit status 7 (92.638198ms)

                                                
                                                
-- stdout --
	multinode-183462
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-183462-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183462 status --alsologtostderr: exit status 7 (91.504198ms)

                                                
                                                
-- stdout --
	multinode-183462
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-183462-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:03:48.622460  157019 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:03:48.622920  157019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:48.622932  157019 out.go:374] Setting ErrFile to fd 2...
	I1109 14:03:48.622938  157019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:48.623147  157019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:03:48.623504  157019 out.go:368] Setting JSON to false
	I1109 14:03:48.623537  157019 mustload.go:66] Loading cluster: multinode-183462
	I1109 14:03:48.623719  157019 notify.go:221] Checking for updates...
	I1109 14:03:48.624608  157019 config.go:182] Loaded profile config "multinode-183462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:48.624623  157019 status.go:174] checking status of multinode-183462 ...
	I1109 14:03:48.625026  157019 cli_runner.go:164] Run: docker container inspect multinode-183462 --format={{.State.Status}}
	I1109 14:03:48.643624  157019 status.go:371] multinode-183462 host status = "Stopped" (err=<nil>)
	I1109 14:03:48.643649  157019 status.go:384] host is not running, skipping remaining checks
	I1109 14:03:48.643657  157019 status.go:176] multinode-183462 status: &{Name:multinode-183462 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:03:48.643702  157019 status.go:174] checking status of multinode-183462-m02 ...
	I1109 14:03:48.643908  157019 cli_runner.go:164] Run: docker container inspect multinode-183462-m02 --format={{.State.Status}}
	I1109 14:03:48.660702  157019 status.go:371] multinode-183462-m02 host status = "Stopped" (err=<nil>)
	I1109 14:03:48.660742  157019 status.go:384] host is not running, skipping remaining checks
	I1109 14:03:48.660751  157019 status.go:176] multinode-183462-m02 status: &{Name:multinode-183462-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (19.37s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (41.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183462 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183462 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (40.713416318s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183462 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (41.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183462
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183462-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-183462-m02 --driver=docker  --container-runtime=crio: exit status 14 (78.561829ms)

                                                
                                                
-- stdout --
	* [multinode-183462-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-183462-m02' is duplicated with machine name 'multinode-183462-m02' in profile 'multinode-183462'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183462-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183462-m03 --driver=docker  --container-runtime=crio: (20.728590152s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-183462
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-183462: exit status 80 (276.854836ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-183462 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-183462-m03 already exists in multinode-183462-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-183462-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-183462-m03: (2.354223755s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.49s)
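Two guardrails are documented by this test: a new profile cannot reuse a machine name that already belongs to another profile (exit 14, MK_USAGE), and node add refuses a node whose name collides with an existing standalone profile (exit 80, GUEST_NODE_ADD). A minimal reproduction, under the same binary-name caveat:

  # rejected: multinode-183462-m02 is already a machine inside profile multinode-183462
  minikube start -p multinode-183462-m02 --driver=docker --container-runtime=crio
  # allowed: -m03 is free, so this creates a standalone profile ...
  minikube start -p multinode-183462-m03 --driver=docker --container-runtime=crio
  # ... which then blocks adding an m03 node to the multinode profile
  minikube node add -p multinode-183462
  minikube delete -p multinode-183462-m03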

                                                
                                    
TestPreload (82.27s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-772044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-772044 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (45.326166447s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-772044 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-772044 image pull gcr.io/k8s-minikube/busybox: (1.074895377s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-772044
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-772044: (5.939103618s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-772044 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1109 14:05:54.598163    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-772044 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (27.382210458s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-772044 image list
helpers_test.go:175: Cleaning up "test-preload-772044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-772044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-772044: (2.337938838s)
--- PASS: TestPreload (82.27s)
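The preload test starts an older Kubernetes with --preload=false, pulls an extra image, stops, restarts with preloads enabled, and then lists images; the point of the final check is that the pulled image survives the restart. Roughly, with the same binary-name substitution:

  minikube start -p test-preload-772044 --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
  minikube -p test-preload-772044 image pull gcr.io/k8s-minikube/busybox
  minikube stop -p test-preload-772044
  minikube start -p test-preload-772044 --memory=3072 --wait=true --driver=docker --container-runtime=crio
  minikube -p test-preload-772044 image list    # the pulled busybox image should still appear
  minikube delete -p test-preload-772044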

                                                
                                    
TestScheduledStopUnix (95.86s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-806284 --memory=3072 --driver=docker  --container-runtime=crio
E1109 14:06:25.253367    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-806284 --memory=3072 --driver=docker  --container-runtime=crio: (19.3384843s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-806284 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-806284 -n scheduled-stop-806284
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-806284 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1109 14:06:39.599975    9365 retry.go:31] will retry after 53.399µs: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.601142    9365 retry.go:31] will retry after 168.75µs: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.602280    9365 retry.go:31] will retry after 250.669µs: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.603401    9365 retry.go:31] will retry after 386.744µs: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.604522    9365 retry.go:31] will retry after 257.417µs: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.605653    9365 retry.go:31] will retry after 735.22µs: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.606769    9365 retry.go:31] will retry after 877.214µs: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.607885    9365 retry.go:31] will retry after 2.482184ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.611072    9365 retry.go:31] will retry after 2.488276ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.614274    9365 retry.go:31] will retry after 4.847655ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.619477    9365 retry.go:31] will retry after 3.462258ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.623680    9365 retry.go:31] will retry after 4.582063ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.628872    9365 retry.go:31] will retry after 14.336449ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.644074    9365 retry.go:31] will retry after 19.44455ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.664282    9365 retry.go:31] will retry after 28.664778ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
I1109 14:06:39.693496    9365 retry.go:31] will retry after 64.506365ms: open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/scheduled-stop-806284/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-806284 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-806284 -n scheduled-stop-806284
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-806284
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-806284 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1109 14:07:48.316624    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/functional-630518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-806284
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-806284: exit status 7 (77.191155ms)

                                                
                                                
-- stdout --
	scheduled-stop-806284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-806284 -n scheduled-stop-806284
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-806284 -n scheduled-stop-806284: exit status 7 (75.911279ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-806284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-806284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-806284: (5.058872174s)
--- PASS: TestScheduledStopUnix (95.86s)
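The scheduled-stop flow above is: arm a stop well in the future, cancel it, re-arm with a short delay, then observe the host transition to Stopped (status exits 7 once it has). A sketch, with an explicit wait standing in for the test's polling:

  minikube start -p scheduled-stop-806284 --memory=3072 --driver=docker --container-runtime=crio
  minikube stop -p scheduled-stop-806284 --schedule 5m           # arm a stop 5 minutes out
  minikube stop -p scheduled-stop-806284 --cancel-scheduled      # disarm it
  minikube stop -p scheduled-stop-806284 --schedule 15s          # re-arm with a short delay
  sleep 30                                                       # illustrative wait; the test polls instead
  minikube status -p scheduled-stop-806284 --format='{{.Host}}'  # prints Stopped, exit code 7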

                                                
                                    
TestInsufficientStorage (9.51s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-589774 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-589774 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.084509837s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4557e203-c2d5-4283-ba38-3c9da8c50a6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-589774] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8696870b-452b-4f77-aa24-94eb9c14ae5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"24ab9d74-b901-4405-b290-9233e3542977","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4419918f-91da-4a55-b0ae-aa7beba6992c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig"}}
	{"specversion":"1.0","id":"0f857c7a-e94c-4156-b082-fa2fa1ed7b39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube"}}
	{"specversion":"1.0","id":"d2f408c1-6b68-4f5a-961f-7155c80c6731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"75428217-8404-414b-a4cb-149c9dd6e1b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3224c789-2106-45fa-a762-858f97109777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a42179d1-1895-44ac-b68c-f3e87835b0a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6a06c92e-e467-41a1-8d55-c2ec54304c28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1299cab1-5552-4389-a2d2-6d8f9ead394e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1cbfdab9-c98f-4f2a-bb49-8afa90f0e50a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-589774\" primary control-plane node in \"insufficient-storage-589774\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c66773f-91cf-41e8-9dbf-2588a9520b75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5171b82c-2fbc-4f2a-8893-fdae77bb897a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ae9355d-6513-4241-96fa-eaada0abdbb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-589774 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-589774 --output=json --layout=cluster: exit status 7 (272.056838ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-589774","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-589774","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 14:08:03.045103  177139 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-589774" does not appear in /home/jenkins/minikube-integration/21139-5854/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-589774 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-589774 --output=json --layout=cluster: exit status 7 (272.681314ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-589774","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-589774","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 14:08:03.318858  177251 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-589774" does not appear in /home/jenkins/minikube-integration/21139-5854/kubeconfig
	E1109 14:08:03.328746  177251 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/insufficient-storage-589774/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-589774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-589774
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-589774: (1.88168548s)
--- PASS: TestInsufficientStorage (9.51s)
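The storage preflight is forced to fail here via the test-only capacity overrides echoed in the JSON events (MINIKUBE_TEST_STORAGE_CAPACITY, MINIKUBE_TEST_AVAILABLE_STORAGE); the test then checks both the start exit code (26, RSRC_DOCKER_STORAGE) and the cluster-layout status. Assuming those variables behave as in this run:

  export MINIKUBE_TEST_STORAGE_CAPACITY=100
  export MINIKUBE_TEST_AVAILABLE_STORAGE=19
  # exits 26: "Docker is out of disk space"
  minikube start -p insufficient-storage-589774 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
  # reports StatusCode 507 / InsufficientStorage and exits 7
  minikube status -p insufficient-storage-589774 --output=json --layout=cluster
  minikube delete -p insufficient-storage-589774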

                                                
                                    
TestRunningBinaryUpgrade (51.09s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3804411376 start -p running-upgrade-958815 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3804411376 start -p running-upgrade-958815 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.234920199s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-958815 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-958815 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.570594168s)
helpers_test.go:175: Cleaning up "running-upgrade-958815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-958815
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-958815: (2.754504202s)
--- PASS: TestRunningBinaryUpgrade (51.09s)

                                                
                                    
TestKubernetesUpgrade (311.76s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.437516013s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-755159
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-755159: (2.35492507s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-755159 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-755159 status --format={{.Host}}: exit status 7 (95.564262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.825283979s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-755159 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (358.624774ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-755159] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-755159
	    minikube start -p kubernetes-upgrade-755159 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7551592 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-755159 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.159017587s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-755159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-755159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-755159: (2.44105566s)
--- PASS: TestKubernetesUpgrade (311.76s)
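The upgrade test's shape, including the expected failure, matches the suggestion block printed above: upgrading a stopped cluster in place works, downgrading is refused, and the supported alternatives are recreating the profile or starting a second one. In outline (same binary-name caveat):

  minikube start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
  minikube stop -p kubernetes-upgrade-755159
  minikube start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
  # going back down is rejected with exit 106 (K8S_DOWNGRADE_UNSUPPORTED)
  minikube start -p kubernetes-upgrade-755159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio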

                                                
                                    
TestMissingContainerUpgrade (59.81s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2226324069 start -p missing-upgrade-877671 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2226324069 start -p missing-upgrade-877671 --memory=3072 --driver=docker  --container-runtime=crio: (23.992108849s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-877671
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-877671
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-877671 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-877671 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.597270511s)
helpers_test.go:175: Cleaning up "missing-upgrade-877671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-877671
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-877671: (2.019511101s)
--- PASS: TestMissingContainerUpgrade (59.81s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (58.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1001229545 start -p stopped-upgrade-039106 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1001229545 start -p stopped-upgrade-039106 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.117819954s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1001229545 -p stopped-upgrade-039106 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1001229545 -p stopped-upgrade-039106 stop: (2.232059279s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-039106 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1109 14:08:57.667692    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-039106 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.446936004s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.80s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-039106
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)
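The stopped-binary upgrade is: create and stop a cluster with an old release binary, then start the same profile with the binary under test and pull its logs. The old-binary path below is just the temporary download this job happened to use:

  /tmp/minikube-v1.32.0.1001229545 start -p stopped-upgrade-039106 --memory=3072 --vm-driver=docker --container-runtime=crio
  /tmp/minikube-v1.32.0.1001229545 -p stopped-upgrade-039106 stop
  out/minikube-linux-amd64 start -p stopped-upgrade-039106 --memory=3072 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 logs -p stopped-upgrade-039106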

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-300116 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-300116 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (85.140295ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-300116] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
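As the error text says, --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config has to be unset before starting without Kubernetes. Sketch (binary-name caveat as before):

  # rejected with exit 14 (MK_USAGE)
  minikube start -p NoKubernetes-300116 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
  # clear any globally configured version, then start without Kubernetes
  minikube config unset kubernetes-version
  minikube start -p NoKubernetes-300116 --no-kubernetes --driver=docker --container-runtime=crio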

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (23.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-300116 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-300116 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.806936798s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-300116 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.14s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-300116 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-300116 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.30431201s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-300116 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-300116 status -o json: exit status 2 (284.795236ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-300116","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-300116
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-300116: (1.970132568s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.56s)

                                                
                                    
TestNoKubernetes/serial/Start (4.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-300116 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-300116 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.129190932s)
--- PASS: TestNoKubernetes/serial/Start (4.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21139-5854/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-300116 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-300116 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.396744ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
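Verifying that Kubernetes is really absent is just an ssh into the node and a systemd check on kubelet; a non-zero exit is the passing outcome here:

  minikube ssh -p NoKubernetes-300116 "sudo systemctl is-active --quiet service kubelet"
  echo $?    # non-zero (kubelet not active) is what the test expects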

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.614307196s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.44s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-300116
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-300116: (1.283537402s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-300116 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-300116 --driver=docker  --container-runtime=crio: (6.971478131s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.97s)

                                                
                                    
TestPause/serial/Start (37.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-092489 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-092489 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (37.317148087s)
--- PASS: TestPause/serial/Start (37.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-300116 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-300116 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.842714ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/false (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-593530 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-593530 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (164.798984ms)

                                                
                                                
-- stdout --
	* [false-593530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:10:43.658803  222033 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:10:43.659069  222033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:10:43.659080  222033 out.go:374] Setting ErrFile to fd 2...
	I1109 14:10:43.659086  222033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:10:43.659341  222033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-5854/.minikube/bin
	I1109 14:10:43.659917  222033 out.go:368] Setting JSON to false
	I1109 14:10:43.661218  222033 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3194,"bootTime":1762694250,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:10:43.661325  222033 start.go:143] virtualization: kvm guest
	I1109 14:10:43.664812  222033 out.go:179] * [false-593530] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:10:43.665993  222033 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:10:43.665999  222033 notify.go:221] Checking for updates...
	I1109 14:10:43.667277  222033 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:10:43.668184  222033 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-5854/kubeconfig
	I1109 14:10:43.669157  222033 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-5854/.minikube
	I1109 14:10:43.670042  222033 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:10:43.671022  222033 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:10:43.672269  222033 config.go:182] Loaded profile config "cert-expiration-883873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:10:43.672363  222033 config.go:182] Loaded profile config "kubernetes-upgrade-755159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:10:43.672446  222033 config.go:182] Loaded profile config "pause-092489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:10:43.672544  222033 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:10:43.698766  222033 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1109 14:10:43.698845  222033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:10:43.756494  222033 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-09 14:10:43.745775952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1109 14:10:43.756602  222033 docker.go:319] overlay module found
	I1109 14:10:43.758039  222033 out.go:179] * Using the docker driver based on user configuration
	I1109 14:10:43.759216  222033 start.go:309] selected driver: docker
	I1109 14:10:43.759233  222033 start.go:930] validating driver "docker" against <nil>
	I1109 14:10:43.759244  222033 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:10:43.760759  222033 out.go:203] 
	W1109 14:10:43.761797  222033 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1109 14:10:43.762809  222033 out.go:203] 

                                                
                                                
** /stderr **
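Note: exit status 14 (MK_USAGE) is the outcome this test expects; cri-o ships no built-in pod networking, so minikube rejects --cni=false whenever the container runtime is crio. For reference only, a sketch of an invocation that would pass this validation, with bridge chosen purely as an illustrative CNI value rather than anything taken from this run:

	out/minikube-linux-amd64 start -p false-593530 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio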
net_test.go:88: 
----------------------- debugLogs start: false-593530 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-593530" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-883873
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:08:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-755159
contexts:
- context:
    cluster: cert-expiration-883873
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-883873
  name: cert-expiration-883873
- context:
    cluster: kubernetes-upgrade-755159
    user: kubernetes-upgrade-755159
  name: kubernetes-upgrade-755159
current-context: ""
kind: Config
users:
- name: cert-expiration-883873
  user:
    client-certificate: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/cert-expiration-883873/client.crt
    client-key: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/cert-expiration-883873/client.key
- name: kubernetes-upgrade-755159
  user:
    client-certificate: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kubernetes-upgrade-755159/client.crt
    client-key: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kubernetes-upgrade-755159/client.key
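The dump above also explains the repeated "context was not found" messages in these debug logs: kubectl resolves --context against the contexts list in the kubeconfig, and no false-593530 entry exists because that profile never started. A sketch of selecting one of the entries that does exist (cert-expiration-883873 is simply the first context listed above):

	kubectl --kubeconfig /home/jenkins/minikube-integration/21139-5854/kubeconfig --context cert-expiration-883873 get pods -A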

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-593530

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-593530"

                                                
                                                
----------------------- debugLogs end: false-593530 [took: 3.05408525s] --------------------------------
helpers_test.go:175: Cleaning up "false-593530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-593530
--- PASS: TestNetworkPlugins/group/false (3.37s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (8.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-092489 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.441375434s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (48.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.153286175s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (48.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (49.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.534236351s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-169816 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b8660e9d-e2a4-48ea-806d-dbea8dc9c026] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b8660e9d-e2a4-48ea-806d-dbea8dc9c026] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.002478471s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-169816 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.23s)
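Note: once the busybox pod reports Running, the test reads ulimit -n, the soft limit on open file descriptors inside the container; what the harness asserts about that value is not shown in this log. A sketch of the same probe run by hand, using the context name from this test:

	kubectl --context old-k8s-version-169816 exec busybox -- /bin/sh -c "ulimit -n"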

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-169816 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-169816 --alsologtostderr -v=3: (15.979128406s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-152932 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [84072b78-3173-4704-8820-d187e9262dd9] Pending
helpers_test.go:352: "busybox" [84072b78-3173-4704-8820-d187e9262dd9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [84072b78-3173-4704-8820-d187e9262dd9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003127261s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-152932 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 -n old-k8s-version-169816
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 -n old-k8s-version-169816: exit status 7 (101.773636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-169816 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
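Note: per the output above, exit status 7 from the status command corresponds to a Stopped host, which the test treats as acceptable before enabling the dashboard addon on the stopped profile. A sketch of reacting to that exit code in a shell check (the message text is illustrative):

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 || echo "host not running, status exit $?"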

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (52.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-169816 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.903118923s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169816 -n old-k8s-version-169816
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-152932 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-152932 --alsologtostderr -v=3: (18.801740437s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.80s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (43.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.563709525s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.56s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152932 -n no-preload-152932
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152932 -n no-preload-152932: exit status 7 (84.374115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-152932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (44.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-152932 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.374188771s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152932 -n no-preload-152932
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m13.289833381s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-273180 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e136284b-ac76-4b4f-ba01-633f83baa0e8] Pending
helpers_test.go:352: "busybox" [e136284b-ac76-4b4f-ba01-633f83baa0e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e136284b-ac76-4b4f-ba01-633f83baa0e8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003385733s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-273180 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-v6s8t" [b40e7490-7646-4e1e-a89a-0936a3e8ca71] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003959698s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-v6s8t" [b40e7490-7646-4e1e-a89a-0936a3e8ca71] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003053678s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-169816 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (18.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-273180 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-273180 --alsologtostderr -v=3: (18.577101834s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.58s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-169816 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
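Note: the image check lists everything loaded in the node's container runtime and calls out tags outside the stock Kubernetes image set, here the kindnetd CNI images and the busybox test image. A sketch of inspecting the same list by hand; the jq filter assumes the JSON output carries a repoTags field, which may vary between minikube versions:

	out/minikube-linux-amd64 -p old-k8s-version-169816 image list --format=json | jq -r '.[].repoTags[]'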

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gcb5c" [b63bef34-a8fe-46c6-b524-40d9292214e9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002603685s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gcb5c" [b63bef34-a8fe-46c6-b524-40d9292214e9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002739314s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-152932 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (30.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (30.952094281s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-152932 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-273180 -n embed-certs-273180
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-273180 -n embed-certs-273180: exit status 7 (101.756512ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-273180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (47.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-273180 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.844472207s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-273180 -n embed-certs-273180
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (45.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (45.042093571s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-331530 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-331530 --alsologtostderr -v=3: (12.55126548s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331530 -n newest-cni-331530
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331530 -n newest-cni-331530: exit status 7 (79.757521ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-331530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-331530 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.556642966s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-331530 -n newest-cni-331530
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-326524 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fc5f7a0f-3467-424e-a629-38217364cc98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fc5f7a0f-3467-424e-a629-38217364cc98] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004266386s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-326524 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p4m9s" [598ff345-5de9-4d94-8bfd-77c4df52c048] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004602206s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-331530 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
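The image check above shells out to `minikube image list --format=json` and reports anything it does not recognise as one of minikube's own images. The filtering step can be illustrated with the sketch below; the sample image list mixes core images with the two extra ones flagged in this run's logs, and the allowlist of registry prefixes is an assumption made for illustration, not the exact rule in start_stop_delete_test.go:302.

	// Illustration of the "non-minikube image" filtering step. The image names come
	// from this run's logs; the allowlist of registry prefixes is an assumption.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Representative output of `minikube -p <profile> image list`.
		images := []string{
			"registry.k8s.io/kube-apiserver:v1.34.1",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"kindest/kindnetd:v20250512-df8de77b",      // flagged in the log above
			"gcr.io/k8s-minikube/busybox:1.28.4-glibc", // flagged for the embed-certs profile
		}

		// Assumed allowlist: registries the expected Kubernetes/minikube images come from.
		allowed := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}

		for _, img := range images {
			known := false
			for _, prefix := range allowed {
				if strings.HasPrefix(img, prefix) {
					known = true
					break
				}
			}
			if !known {
				fmt.Println("Found non-minikube image:", img)
			}
		}
	}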

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-593530 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
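The KubeletFlags subtests only capture the kubelet command line with `pgrep -a kubelet` over `minikube ssh`. If one also wanted to assert on a specific flag, a sketch could look like the following; the flag checked here (--container-runtime-endpoint) is an assumption for illustration and is not something net_test.go verifies.

	// Capture the kubelet command line the same way the test does, then grep it for
	// a single (assumed) flag.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "auto-593530",
			"pgrep -a kubelet").CombinedOutput()
		if err != nil {
			fmt.Printf("ssh failed: %v\n%s\n", err, out)
			return
		}
		cmdline := strings.TrimSpace(string(out))
		fmt.Println("kubelet command line:", cmdline)
		if strings.Contains(cmdline, "--container-runtime-endpoint") {
			fmt.Println("kubelet is pointed at an explicit CRI endpoint")
		}
	}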

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-593530 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t428k" [dad9d2d2-c399-43e8-8737-1add67dd1ea2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t428k" [dad9d2d2-c399-43e8-8737-1add67dd1ea2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003586208s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p4m9s" [598ff345-5de9-4d94-8bfd-77c4df52c048] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003508478s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-273180 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (20.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-326524 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-326524 --alsologtostderr -v=3: (20.392365965s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (20.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-273180 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.737477812s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-593530 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
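The three probes that follow the NetCatPod deployment (a DNS lookup of kubernetes.default, a netcat connect to localhost:8080 inside the pod, and a netcat connect back through the netcat service to exercise hairpin traffic) use exactly the commands shown above. A minimal sketch that runs all three against the auto-593530 context in one pass, rather than as separate subtests, looks like this:

	// The three connectivity probes from the subtests above, run back to back.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := "auto-593530"
		checks := map[string]string{
			"dns":       "nslookup kubernetes.default",    // cluster DNS resolves the API service
			"localhost": "nc -w 5 -i 5 -z localhost 8080", // port is open inside the pod itself
			"hairpin":   "nc -w 5 -i 5 -z netcat 8080",    // pod reaches itself back through its service
		}
		for name, probe := range checks {
			out, err := exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
				"--", "/bin/sh", "-c", probe).CombinedOutput()
			fmt.Printf("%s: err=%v\n%s\n", name, err, out)
		}
	}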

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (53.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.487864081s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524: exit status 7 (93.732066ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-326524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
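EnableAddonAfterStop relies on the fact that `minikube status` exits with code 7 once the host is stopped, which the test treats as acceptable before enabling the dashboard addon. A rough stand-alone version of that exit-code handling, using plain os/exec rather than the suite's helpers, might look like:

	// Sketch of the EnableAddonAfterStop pattern above: `minikube status` exits 7 once
	// the host is stopped, which is tolerated before enabling the addon.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "default-k8s-diff-port-326524"

		status := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}",
			"-p", profile, "-n", profile)
		out, err := status.CombinedOutput()
		fmt.Printf("status output: %s\n", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The log above shows exit status 7 for a stopped host ("may be ok").
			fmt.Println("status error: exit status", exitErr.ExitCode(), "(may be ok)")
		} else if err != nil {
			fmt.Println("could not run minikube status:", err)
			return
		}

		enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
			"-p", profile, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
		if out, err := enable.CombinedOutput(); err != nil {
			fmt.Printf("enable dashboard failed: %v\n%s\n", err, out)
		}
	}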

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-326524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.859195179s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-326524 -n default-k8s-diff-port-326524
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (55.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.115327487s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cfzqd" [abdce049-274b-4d8e-b0bb-1db69a7fd265] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003004029s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-m779r" [ff1cb5f8-6782-4149-bb4b-4650d8427294] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.002941749s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cfzqd" [abdce049-274b-4d8e-b0bb-1db69a7fd265] Running
E1109 14:15:54.596588    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/addons-762402/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003507156s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-326524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-593530 "pgrep -a kubelet"
I1109 14:15:56.422743    9365 config.go:182] Loaded profile config "calico-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-593530 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wlnpb" [47603f66-ec1f-43fe-8e3d-a7043ab77fd9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wlnpb" [47603f66-ec1f-43fe-8e3d-a7043ab77fd9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00414254s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-326524 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-tcnpd" [074c5b01-c76a-4b19-9daf-0297d84e3ddf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003634116s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-593530 "pgrep -a kubelet"
I1109 14:16:04.570704    9365 config.go:182] Loaded profile config "custom-flannel-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-593530 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z5wl6" [5c0369c7-4664-4761-ae25-85500c26df93] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z5wl6" [5c0369c7-4664-4761-ae25-85500c26df93] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.002737968s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-593530 "pgrep -a kubelet"
I1109 14:16:05.570712    9365 config.go:182] Loaded profile config "kindnet-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-593530 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4wmpg" [8fea1675-ab17-48d9-b072-f244fdf8a6a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4wmpg" [8fea1675-ab17-48d9-b072-f244fdf8a6a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004016737s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-593530 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (63.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m3.547469933s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-593530 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-593530 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.204550197s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (65.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-593530 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m5.610558199s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-593530 "pgrep -a kubelet"
I1109 14:17:10.945996    9365 config.go:182] Loaded profile config "enable-default-cni-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-593530 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ppnst" [fedcaa8b-d1c3-4770-800c-a4badf4962ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 14:17:12.919149    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ppnst" [fedcaa8b-d1c3-4770-800c-a4badf4962ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00339825s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-l2t4j" [bb2c4e85-8309-43e5-85ef-07c93b06cc9b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002958902s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-593530 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-593530 "pgrep -a kubelet"
I1109 14:17:23.537743    9365 config.go:182] Loaded profile config "flannel-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-593530 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9sltp" [f882ca4b-b2b7-4a5a-8e43-68ce311941b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 14:17:25.181097    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-9sltp" [f882ca4b-b2b7-4a5a-8e43-68ce311941b2] Running
E1109 14:17:30.302899    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/no-preload-152932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003935185s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-593530 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-593530 "pgrep -a kubelet"
E1109 14:17:43.642719    9365 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/old-k8s-version-169816/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1109 14:17:43.806357    9365 config.go:182] Loaded profile config "bridge-593530": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-593530 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dv5sx" [25ba33a6-6daf-42b6-bf70-ec827b579e06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dv5sx" [25ba33a6-6daf-42b6-bf70-ec827b579e06] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003303759s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-593530 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-593530 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                    

Test skip (27/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-565545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-565545
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-593530 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-593530" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-883873
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:08:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-755159
contexts:
- context:
    cluster: cert-expiration-883873
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-883873
  name: cert-expiration-883873
- context:
    cluster: kubernetes-upgrade-755159
    user: kubernetes-upgrade-755159
  name: kubernetes-upgrade-755159
current-context: ""
kind: Config
users:
- name: cert-expiration-883873
  user:
    client-certificate: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/cert-expiration-883873/client.crt
    client-key: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/cert-expiration-883873/client.key
- name: kubernetes-upgrade-755159
  user:
    client-certificate: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kubernetes-upgrade-755159/client.crt
    client-key: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kubernetes-upgrade-755159/client.key
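
Note: the kubeconfig dumped above explains the kubectl failures in this section: it only defines the cert-expiration-883873 and kubernetes-upgrade-755159 contexts, so any lookup of kubenet-593530 fails. A minimal way to confirm this against the same kubeconfig (an illustrative sketch; the collector's actual invocation is not shown):

    # Contexts known to the kubeconfig above; kubenet-593530 is not among them,
    # which is why every kubectl probe reports "context was not found".
    kubectl config get-contexts
    kubectl --context kubenet-593530 get pods -A   # reproduces: error: context "kubenet-593530" does not exist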

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-593530

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-593530"

                                                
                                                
----------------------- debugLogs end: kubenet-593530 [took: 3.233073212s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-593530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-593530
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-593530 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-593530
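
Note: the "netcat:" probes above are DNS and connectivity checks intended to run from inside the test's netcat deployment against the cluster DNS service at 10.96.0.10; here they all fail up front because no cilium-593530 context exists. A hedged sketch of equivalent manual checks, assuming a running cluster and a deployment named netcat with the DNS tools installed (both assumptions; the collector's exact commands are not shown):

    # Hypothetical manual versions of the DNS probes, run in the netcat pod:
    kubectl --context cilium-593530 exec deploy/netcat -- nslookup kubernetes.default
    kubectl --context cilium-593530 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
    kubectl --context cilium-593530 exec deploy/netcat -- dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local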

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-593530" does not exist
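
Note: these cilium-specific entries would normally describe the cilium DaemonSet and operator Deployment and capture their container logs; with no cilium-593530 cluster they all short-circuit on the missing context. As a hedged sketch, the same data could be pulled on a live cluster with commands like the following (the resource names and the kube-system namespace are assumptions about a default Cilium install):

    # Hypothetical equivalents of the cilium DaemonSet/Deployment probes:
    kubectl --context cilium-593530 -n kube-system describe daemonset cilium
    kubectl --context cilium-593530 -n kube-system logs daemonset/cilium --all-containers
    kubectl --context cilium-593530 -n kube-system logs daemonset/cilium --all-containers --previous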

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-593530" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-883873
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21139-5854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:08:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-755159
contexts:
- context:
    cluster: cert-expiration-883873
    extensions:
    - extension:
        last-update: Sun, 09 Nov 2025 14:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-883873
  name: cert-expiration-883873
- context:
    cluster: kubernetes-upgrade-755159
    user: kubernetes-upgrade-755159
  name: kubernetes-upgrade-755159
current-context: ""
kind: Config
users:
- name: cert-expiration-883873
  user:
    client-certificate: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/cert-expiration-883873/client.crt
    client-key: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/cert-expiration-883873/client.key
- name: kubernetes-upgrade-755159
  user:
    client-certificate: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kubernetes-upgrade-755159/client.crt
    client-key: /home/jenkins/minikube-integration/21139-5854/.minikube/profiles/kubernetes-upgrade-755159/client.key
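
Note: beyond the missing cilium-593530 entry, current-context is empty in this kubeconfig, so kubectl has no default context at all; every command must either pass --context explicitly or select one first. A minimal sketch, using a context name taken from the config above:

    # Select one of the existing contexts as the default, or pass it per command:
    kubectl config use-context cert-expiration-883873
    kubectl --context cert-expiration-883873 get nodes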

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-593530

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-593530" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-593530"

                                                
                                                
----------------------- debugLogs end: cilium-593530 [took: 3.556732251s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-593530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-593530
--- SKIP: TestNetworkPlugins/group/cilium (3.74s)

                                                
                                    